
Providing Code Execution as a Service

BSides Zagreb 2026 · 52:04 · 23 views · Published 2026-03 · Watch on YouTube ↗
About this talk
Running untrusted user code safely on your infrastructure requires careful architectural choices. This talk explores sandboxing approaches—from containers and user namespaces to Kata containers and GVisor—examining specific risks like inter-container networking, kernel escape vectors, and configuration pitfalls. Through practical demonstrations, it shows how small implementation details and runtime selection can eliminate most attack surface while enabling legitimate use cases like interactive learning and API testing.
Original YouTube description:
Presentation: We’ve reached a point where it’s increasingly common to let users write code and run it on your infrastructure. It sounds like a security nightmare, but “code execution as a service” also enables legitimate use cases like feature testing, automation, interactive learning, data processing and ad‑hoc computation. This session breaks down ways to run untrusted code, what can go wrong and what you need to do to keep it contained. We’ll cover several implementation approaches, risks, security implications and non-obvious simple details that can easily eliminate most of the risk. Speaker: Tomislav Turek works in Infobip's Application Security team, which analyzes and performs security reviews of application systems, integrations and code. While mostly focused on application security and software engineering, he likes to tinker with all things related to security. He is an active member of the Croatian capture the flag team 'Phish Paprikaš', with whom he has achieved significant success in information security competitions. Recorded at BSidesZagreb (https://www.bsideszagreb.com/). #cybersecurity #bsides
Transcript [en]

Rock and roll people, good morning and welcome. My name is Omar, and I'll be the moderator for this track today. There's just a quick intro to the talks and then we'll get cracking. First of all, I want to thank you all for coming here, and welcome to BSides Zagreb 2026. This is our annual BSides; last year we had around 300 people, and this year we have even more. Thank you all for showing up. As you know, BSides is a non-profit, free, community-driven event that we like doing for the community, by the community. There are a lot of volunteers behind this, a lot of people who give their time and effort to make this happen.

First of all, I'd like to thank our speakers who are here with us today, on both tracks, for giving their time, expertise, and knowledge, and for their willingness to share it with all of us.

Also, events like this can't happen without support from various other organizations, and I'd like to thank them as well as part of this intro. The sponsors for this year's BSides are Marlinx Cyber, Infobip, Avola, King ICT, Trend Micro, Group IB, Infigo, Spun, Contra Digital, Algebra Bernays, Sessionize, BSides Zadar, Radiona Axion, Checkpoint Ingram, and Reversing Labs. They're all here, and we're all here, because they managed to fund us so we can have this, and hopefully they'll do the same thing next year. Thank you all for coming. Now with that, let's get cracking. Our first presentation starts with a topic that sits right at the intersection of developer productivity and security risk: letting users run

their own code on infrastructure you control. Sadly, I'm too old to memorize everything, so I had everything written down; bear with me. Maybe it doesn't look professional, but at least it's efficient. Our first speaker, Tomislav Turek, leads the application security team at Infobip, where he performs security reviews of applications, integrations, and code. He's also an active CTF competitor with the Croatian team Phish Paprikaš, bringing a hands-on attacker mindset to defensive problems. Today, he'll walk us through what it really means to provide code execution as a service, how to run untrusted code safely, what mistakes organizations make, and the small implementation details that can prevent big security problems. Please welcome Tomislav.

Okay, thank you. Thank you all for coming to the presentation, and thank you for the introduction. Today I'm going to talk about providing code execution as a service. Since I already had a very nice introduction, I can actually skip this slide; you've heard it all. I work in the application security team at Infobip, focusing on the security of our applications and of integrations with Infobip applications. Okay.

Let's start immediately with the presentation, and what I want to do first is give you the motivation behind it. Providing code execution is not something new; you see it today and you've seen it for the past ten years or even more, in different use cases. For instance, if you want to test a programming language or a feature of a programming language, or if you're doing some kind of interactive learning, providing code execution is basically a baseline nowadays. Furthermore, there are various try-out applications where you also have the ability to write code, and this code is sent to a backend, and the backend executes it. So it's nothing new; I would say it's a normal feature nowadays.

Even more, at Infobip we had a similar feature where we allowed our clients to try out some features of the API: they would write code, and this code would be sent to us and executed on the backend. This feature is no longer available, but we had it, so we have some prior experience with it. So this is not something new, and what we're seeing today is that adoption is increasing, especially with the rise of LLMs and AI agents. And what we've seen

previously, and what we see even today, is a fundamental misunderstanding among engineering teams of the risks and implications of running users' code, of the possible defense mechanisms, and, maybe most importantly, of the fact that the technical approach chosen to run users' code will impose different risks. So while you, as security experts, would usually nuke this out of orbit, I believe this can still be done, and done in a secure manner, but the options depend on the specific use case. What I want to do today is share what does and does not work, based on our prior experience.

Also, to point out maybe the most important slide before we start with the examples: this presentation is about sandboxing and its variants, so different risks and different implications depending on the sandboxing that is chosen, and about how to provide execution of untrusted third-party code. We're going to focus on roll-your-own implementations. So this is not about AI agents, LLMs, personal assistants, or any type of cloud offering; we're not going to talk about how you can spin up an AWS Lambda and execute code with it or through it. What I want to do next is go through several scenarios with different approaches to providing code execution as a service. And we're going to move gradually, starting from something small and performant toward something much bigger and more complex. Okay, so...

Let's say you're starting a project and you don't know anything about running user code. You would usually start googling, maybe researching how you can provide this to your users, and you might stumble upon comments on the internet stating, for instance, that for Java you should use the SecurityManager to execute someone's code, or that for Python you can use the exec function to run code. This is actually true: you can use these functions to run code, but the safety is questionable. You'll see that the security level of these kinds of executions is really subpar; or, more precisely, there's no security in these types of executions at all. So let's start with this kind of

advice to point out the risks and talk about possible mitigations, or rather to show that there are no mitigations. What we're going to do is use the Python Flask framework and importlib to execute code. We've chosen importlib because the exec function is considered unsafe, and if you tinker with importlib, you will see that when you're executing a user's code, you're getting a separate module, which is considered a separate namespace. So it's not running in the context of the main module or any other module that initiates the execution of the user's code; from that point of view, you could say the data is separated. As I said, this is a Flask app. I'll only show

you the most important parts of the code. There is some boilerplate where I'm rendering HTML and handling input, but this is the most important part: I'm writing the code supplied by the user into a file and then using importlib to load that file and execute it as a module. I just take the output and return it, and this output is shown on the web page. This is how the web page looks: it's simple, there's an input box where I put the code, I click run, I get the output, and that's pretty much it. I can immediately demonstrate that this is unsafe, and it's kind of logical, because

I can run anything. There are no restrictions; importlib does not restrict me from reading files from the file system, so security is not really present. To make things even worse, I also want to demonstrate that the initial assumption, that the data is not accessible because a different module is involved when running the user's code, is also not true. Because if you look at it from the computer's perspective, you're still executing code within the same binary and within the same process. And if you're in the same process as another module, you can still access that module. What I'm demonstrating here is me actually going through sys.modules, getting the main module and getting the app variable

which holds the definition of my Flask app. I've injected a new endpoint, which gets registered once this code is executed: when I visit the /pwn endpoint, you can see that I've added the /pwn rule here and I'm rendering the HTML. So importlib also provides you with no separation, although at first one might think there is some kind of separation.

Okay, so let's move this a little bit further and introduce some kind of prevention. What we're doing here is removing built-in functions and allowing only the print function. This means that for this module, I will only allow execution of the print function. Of course, there are some baseline attributes present in each module which you actually cannot remove, because they control how the program behaves. And this is the example after I've added the restrictions: I can still use the print function to access the main module. Why is that? Because this print in Python is essentially not just a plain function; it's an object with attributes, so I can use reflection to obtain the main module and then access or tamper with the URL map.

Okay, so let's take this a little bit further and remove the built-ins completely. Now the user is essentially not able to execute any type of printing. But let's take a few seconds and allow printing, just to show you the base classes. The base classes that I mentioned previously are the classes that are always imported, and there's one class called Quitter. This Quitter class actually uses the sys module: it imports it and uses it in its call function. Because the Quitter class, which is one of those base classes, uses the sys module, I can use the Quitter class and then reflection to access the sys module and tamper with the URL map. This means that any type of prevention I tried to implement doesn't work in my case.

So, to conclude this approach: whichever programming language you use, directly executing user code like this is considered unsafe, and whatever you do, the main process will always be tightly coupled to it. This means you're severely impacting security, or rather you have no security; but on the other hand, performance-wise you're having a really good time, because this is the thinnest approach, with a very low performance impact. Okay, so let's go to the next logical step. I said we're going to go gradually, so let's actually introduce sandboxing to this web app. Okay, so...
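As a side note, the escape pattern demonstrated above can be condensed into a short, runnable sketch. This is illustrative, not the talk's actual code: even with __builtins__ completely emptied, code run through exec can still climb from a plain tuple back to a loaded module's globals and recover the real sys module.

```python
# User-supplied code running with no builtins at all. It walks object's
# subclass list and fishes a module's globals out of any class whose
# __init__ is a plain Python function (e.g. os._wrap_close in CPython).
user_code = """
found = None
for cls in ().__class__.__base__.__subclasses__():
    try:
        module_globals = cls.__init__.__globals__
    except:  # bare except: exception names are builtins and unavailable here
        continue
    if found is None and "sys" in module_globals:
        found = module_globals["sys"]
"""

env = {"__builtins__": {}}   # the naive "sandbox": strip every builtin
exec(user_code, env)

print(env["found"])          # the real sys module, recovered from inside
```

The exact class that leaks the globals varies by interpreter version, which is precisely why denylisting names inside the process can never be made complete.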

If we look at this from the performance side, the next lightweight approach would be to use built-in kernel filters such as seccomp, AppArmor, and SELinux. I'll use just one as an example: seccomp, to filter system calls. This is how the code looks; it's in C, just as an example. What I'm doing here is defining a denylist, so I'm not allowing these OS system calls, and I have a rule that allows everything else. This means that if any code calls one of these system calls directly, such as fork, kill, or execve, it will not execute, because seccomp

will not allow it. I added the denylist to my ruleset, loaded the ruleset, and that's pretty much it. Any code running while this seccomp ruleset is loaded should not be able to use these system calls. So this is good: it will limit untrusted code, and it works. But the problem with my approach here is that not everything is limited. From the perspective of what seccomp gives you, it limits system calls; file system access, memory access, and resources are not essentially limited. You also have to take into account that sandboxing at this level is very low level, so if you want adequate sandboxing, you would usually combine seccomp with other tools such as

AppArmor, because AppArmor gives you additional capabilities for restricting code. Furthermore, to reference my example: this is still vulnerable, even though I have a denylist, because something can always slip through, and I have an example where my denylist is actually not complete. This is example C code that also reads a file from the file system, and it works because I did not restrict the open and read syscalls. So if I want to make things secure, I would have to change my approach to an allowlist. But then writing policies becomes complex: there are numerous issues you will stumble upon eventually, because you cannot always be sure that you have

properly scoped your list, and you cannot be sure that nothing will break. Also, if you're combining things, if you use seccomp and AppArmor together, you have multiple pieces to orchestrate, and then you also have multiple points where your sandboxing may fail. But we're in luck: there are smart folks who bundled everything I've mentioned into a single package, and this package is called containers. So let's move on to the next level and briefly look at sandboxing from a container perspective. I believe everyone more or less knows nowadays what containers are; I'll just recap that this is a feature of the operating system, a tool

that allows you to isolate applications from one another, because you get a separate namespace for each app. Let's say apps cannot communicate with each other, although it also depends on the gory details of how they could communicate; essentially, it's designed so that apps are not able to communicate between each other. Containers in essence utilize multiple components I mentioned previously: they bundle seccomp, AppArmor, cgroups, namespaces, and capabilities with some sane defaults, so you're actually getting sandboxing just by running a single container. One could say this is a set-it-and-forget-it feature, but you will see that this is

not really true, because in, let's say, 90% of business use cases you cannot just use it as is. You have to think about the details of your specific architecture. We will use the application I showed previously to go through an imaginary scenario and show you some of the risks that come with running containers by default, without making any changes. One note here: if we're executing code from multiple users, you would never use a single container to run everyone's code inside it, because you're not getting any boundaries that way. There is a boundary between the main app and that single container, but between

users, you're actually melting those boundaries, and users can access other users' code and data. So we must use a separate container for each user's code. Okay, we'll go through the imaginary scenario. This scenario is completely imaginary, although I did use some things I've seen in prior work, so it might resemble them; that's intentional, because this is where we've seen fundamental misunderstandings of how to provide sandboxing. So this is our baseline architecture for code execution with our app: we have a proxy in front this time, and this proxy forwards all requests to the app that executes Python code. The app then spawns, for each piece of code, a single container that runs it, and we use the Docker container logs to obtain the output and push it back to the user.

I want to focus a bit more on these two components, so you better understand the architecture and the risks we will demonstrate. The proxy has several ports open, the most important being port 8000, which uses HTTP and accepts outside requests. The other ports are not exposed, so you cannot reach port 8001 from the outside. Everything that comes to port 8000 gets forwarded to port 5000, where my app container is living. This app container enables a debug console on the web application that allows only 127.0.0.1, so it's only available from localhost. The VM running this container set exposes the Docker Unix socket to the application container, because we have to enable creation of containers per request. The general idea is that outside requests go through the proxy to reach the application, and if developers have any issues with the application, they can always use SSH to attach to the debug console on the web app and debug whatever they need.

Also, to point out maybe the most important feature: the application needs the real client IP to be aware of its users. So the proxy adds the X-Forwarded-For header to the request, attaching the real client IP coming from the outside, so that the application knows the IP address. For security reasons, an XFF header from the outside cannot pass through the proxy: the proxy will not allow outside requests to attach their own X-Forwarded-For header, because that might prove insecure, since we're exposing the debug console on the application container. Okay, so this is the most relevant code from the application. What I'm doing here: every container gets a unique ID attached, so that each user's code is executed separately.

We're creating a temporary directory for each piece of code and writing the code into a file. For the container, we're running a default Python image, executing the python3 binary on the code, and providing the code through a container volume; nothing revolutionary. We're using the container logs to get the output of the executed code and returning it to the user through the web UI. I've also added this to emphasize once more: this is a debug application, so it allows only 127.0.0.1, and there's a ProxyFix middleware so that the application respects the XFF header. Okay.
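The per-request launch flow just described can be sketched roughly like this. This is a hypothetical helper, not the talk's actual code; the image name, paths, and timeout are illustrative.

```python
import tempfile
import uuid
from pathlib import Path

def prepare_submission(code: str) -> Path:
    """Write one user's code into its own temporary directory."""
    workdir = Path(tempfile.mkdtemp(prefix="usercode-"))
    (workdir / "main.py").write_text(code)
    return workdir

def build_run_command(workdir: Path, image: str = "python:3-slim") -> list[str]:
    """Assemble the `docker run` argv: one short-lived container per
    submission, with the code mounted read-only via a volume. Output is
    collected from the container logs by the backend."""
    name = f"usercode-{uuid.uuid4().hex[:12]}"   # unique container per request
    return [
        "docker", "run", "--rm", "--name", name,
        "-v", f"{workdir}:/code:ro",             # provide code through a volume
        image, "python3", "/code/main.py",
    ]

# Usage (requires a running Docker daemon):
#   subprocess.run(build_run_command(prepare_submission("print('hi')")),
#                  capture_output=True, text=True, timeout=30)
```

Note that this baseline deliberately mirrors the demo: a default image, no resource limits, no dropped capabilities. The hardening discussed later in the talk changes exactly this command.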

Let's boot the application. Once it's booted, you can see that it's listening on two interfaces: one is the localhost interface, which is expected, and the second is 172.19.0.3, which is in the subnet of the container network. This is not exposed to the public in any way. On the container network, 172.19.0.2 is actually the proxy in front. This is me just testing the app: you can see that it works on app.local, port 8000, so this is me going through the proxy. And if I go through the proxy and try to attach to the console, it will not work; it returns a bad request, because the request

is denied since I don't have the localhost IP address. Okay, so now I want to touch a little on the risks, and on maybe the number one most important thing in container networks. The default network behavior with containers is that once you're running inside the container network, there is no restriction at the network level. This means that if you're running code in container number three inside the container network, you're able to communicate with any other container in that network. Furthermore, if you're making requests to the outside from this network, the request will carry your infrastructure's IP, meaning that any IP whitelisting you might have employed is not working, and it's actually melting any

type of, let's say, L3 authentication.
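To make the header-trust problem concrete: once a client can reach a service directly, instead of through the proxy that sanitizes headers, nothing stops it from attaching whatever X-Forwarded-For value it likes. A toy illustration, with a throwaway local HTTP server standing in for the app:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoXFF(BaseHTTPRequestHandler):
    """Stand-in for the app: it blindly trusts X-Forwarded-For."""
    def do_GET(self):
        body = (self.headers.get("X-Forwarded-For") or "<absent>").encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):   # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), EchoXFF)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# Talking to the service directly, the "client IP" is whatever we claim.
req = urllib.request.Request(
    f"http://127.0.0.1:{port}/",
    headers={"X-Forwarded-For": "127.0.0.1"},   # spoofed: "I am localhost"
)
seen = urllib.request.urlopen(req).read().decode()
server.shutdown()
```

This is why X-Forwarded-For is only meaningful when the last hop is a proxy you control and no one can bypass it.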

Okay, so let's move on to a more malicious scenario now. Since everything is working, we can actually execute a reverse shell in the container that is being run inside the container network. I'm running the reverse shell here, and you can see that I'm obtaining a connection from the container itself. Immediately you can see that, since we're running a default Python image, we're getting root privileges in the container, meaning we can install whatever we want and tamper with system files and configuration files. Basically, this container is completely ours.

We can also do even more from a network perspective, and I want to demonstrate a simple technique for becoming aware of other containers without using, for instance, Nmap. Since we're in a container network, every container gets a hostname as well as an IP address. In Kubernetes this is more or less the default; with Docker container networks it's a little different, because you actually have to create a network for this to work. But if you're inside a container, you can obtain the container's IP by listing the interfaces, and then use DNS reverse lookups to obtain the hostnames of other containers, so you become aware of

which container is responsible for which operation, maybe from the container's name. For instance, in my case, you can immediately see what the 172.19.0.2 address is and what the 172.19.0.3 address is, so these might be the most important ones for my architecture.
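The reverse-lookup trick is a few lines of standard-library Python. A sketch (sweeping a subnet this way is slow rather than noisy on the wire, since it is just PTR queries against the embedded DNS):

```python
import ipaddress
import socket

def reverse_lookup(ip: str):
    """Best-effort PTR lookup: map a neighbouring container's IP to a name."""
    try:
        hostname, _aliases, _addrs = socket.gethostbyaddr(ip)
        return hostname
    except OSError:          # no PTR record, or lookup failure
        return None

def sweep(subnet: str) -> dict:
    """Resolve every host in the container subnet,
    e.g. sweep('172.19.0.0/29') -> {'172.19.0.1': 'gateway-name', ...}."""
    return {str(ip): reverse_lookup(str(ip))
            for ip in ipaddress.ip_network(subnet).hosts()}
```

Inside a Docker user-defined network or a Kubernetes cluster, the returned names are often the container or service names, which is exactly what makes this useful for mapping the architecture.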

Okay, so what we actually did is obtain full access to the container. What this means is that there is no longer any proxy in between, and if there is no proxy in between, you can communicate directly with containers inside the container network and attach an XFF header of your liking. If you attach an XFF header with a localhost IP address, you should be able to reach the debug console. Even more, you can use SSH forwarding to an external server, and then use that external server to set the XFF header and obtain console access on the app. I've demonstrated this too: I've used SSH forwarding to open port 1990 on a remote server, forwarded to 172.19.0.3, port 5000, which is my app. So I can simply use localhost on that server

to access the console. Since I have the ability to access the console, I also have the capability to execute a reverse shell from the console. If I do this, it means I'm now sitting on the app container, so this is also a form of lateral movement. This is where we are now: we've successfully escalated from the container executing user code to the app container. And since we're sitting on the app container, there's the Unix socket exposed there that allows the app to

create containers. Since this is the case, I can easily attack this Docker Unix socket by installing the Docker CLI and then communicating with the socket from within the app container to run privileged containers. And if I can run privileged containers, that implies I'm able to jump onto the host. What I did here, just to demonstrate, is mount the root file system of the host; at the midpoint here I've exited the privileged Docker container, so you can see that the first part is the host file system and the second part is the container file system. When I exited, I went back to the application container and its home folder. You can see the host file system's home folder, with the VM user inside the virtual machine, and the container's home folder, which is empty.

Okay, so to recap: as a user, we told the app to run a shell, and this shell was used to add the XFF header, presenting myself as coming from localhost, which gave me the ability to attach to the debug console. From the debug console, I escalated to the app container, communicated with the Docker socket to run a privileged container, and that allowed me to break out onto the virtual machine. So: a simple example of an architecture where all defaults are used, and I am actually completely able to break out of the whole container system onto the virtual machine.
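The Docker-socket abuse at the end of that chain boils down to a single API call. A hedged sketch of the request body (the field names follow the Docker Engine API; actually sending it requires access to /var/run/docker.sock, and the image name is illustrative):

```python
import json

def privileged_escape_spec(image: str = "python:3-slim") -> dict:
    """Container-create payload that mounts the host's root filesystem.
    Anyone who can POST this to the Docker socket effectively owns the host."""
    return {
        "Image": image,
        "Cmd": ["chroot", "/host", "sh"],   # land in the host filesystem
        "HostConfig": {
            "Privileged": True,             # full capabilities, no seccomp
            "Binds": ["/:/host"],           # host root mounted at /host
        },
    }

# The attack is then: POST /containers/create with this JSON over the Unix
# socket, followed by /containers/{id}/start -- which is exactly why the
# socket should never be exposed to the app container directly.
payload = json.dumps(privileged_escape_spec())
```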

Okay, so now let's move from these risks to mitigations, or let's talk about what we can do better. You will now see that sandboxing is not really that easy. I already mentioned maybe the number one issue; I wouldn't say nobody thinks about it, but it is the thing we see most often: container networking. In containers, the network is always open, and if you're making requests from a container to the outside, they will carry the infrastructure's public IP. This is easily mitigated: for instance, in my architecture, where Docker is used, I can just set the configuration icc=false, which is inter-container communication,

and which disables direct communication between containers. I'm also demonstrating this here: you can see that by running two containers with this configuration set, I'm not able to communicate with the other container directly, so the operation times out. But you have to be careful, because even with this configuration set, if you're publishing a port, that port will be available to the containers. Direct communication between containers will not work with icc=false; but since the port is published, you can go to the host IP, which is actually 172.17.0.1, and communicate with the published port, because the host opens this port and forwards the requests to that container. If you want to make this secure, you would need to

employ some kind of firewalling here. Second, you've seen me accessing the container and immediately having the root user. This was because I was using a default image.

Sorry. So you can use, so for this to be resolved, you can use some ready-made secure images such as distroless or Docker hardened images. And to disable any type of package installation, you should disable APT or other package managers. Okay, now let's move on to some good hygiene stuff. So in my code, there were a lot of parameters missing. So this was the initial code. We did not do much. We used the default image. We just ran, so mounted the volume and ran the code. There are also good things that you can employ here, such as the read-only file system, dropping capabilities, setting security options to not allow new privileges, and then restricting temporary file system. This is mostly dependent on your use case. You

might use all of these, you might use only a portion of these. It depends. We're still not done. So, from a point of view of what we have already addressed there are still issues in containers you also have to think about resource consumption and capabilities of denial of service so if one has the ability to access your infrastructure through a container and you did not limit any resources one can simply mine that's also enough and imposes risk so you would also need to put some kind of restrictions on resources, and ideally perform a timeout for any user code that is being ran. So if you're expecting that no user code should execute for longer than 10 minutes, then

you put 10 minute timeout. If in 10 minutes the container does not exit by itself, you exit it forcefully through the backend. We're still not done. So if you would come to us for a review, we would also tell you that there are some nice architectural changes that you can make to the initial architecture. So this was the initial architecture. we would also suggest to isolate into two separate container networks and to move the unix socket so unix socket ideally should not reside in the container which holds the application rather you would separate the unix socket on to a single responsibility container and this container would expose an http api which allows you to execute only certain uh

operations on the unix socket uh furthermore inside the container network you have more capabilities because you can now restrict container so intercontainer communication inside that network but also you can restrict that it cannot go to the to the outside i don't have a line i have it down here so you can also restrict that it never goes to the outside it's just used for executing code we're still not done There's still one thing that remains and that's the kernel surface. So from a perspective of containers, they're still running on the host kernel. So this is still a possibility to be abused, to escape from the container and do something malicious. It's not something that's maybe in focus, but there were successes in the past. There

are CVEs that are demonstrating these types of vulnerabilities. So We also have to go a little bit beyond containers and show some examples on what can be used to also cover the kernel surface. So first of all, you can employ as, let's say, a reduction of risks, the user namespaces. So this means that by making a configuration change, you can this is the flag in the Docker info when the configuration change is made, you can run containers on a different UID from a UID zero. In that way, the kernel surface is reduced, but there are problems with these username spaces, at least what I've seen previously, there might be some restrictions and maybe some features

will not work. So also depends on the use cases, but it's getting better as time progresses. VMs are also an option here, but they're usually above the cost benefit threshold because there is a severe performance impact on utilizing VMs. You can see that on boot, so it requires a few seconds to boot on a performance machine. There is an alternative to utilize micro VMs. For instance, I'm demonstrating here Incus. This Incus can also be a firecracker. So these micro VMs are minimalist VMs. They don't have a lot of other support and they're focused on short-lived instances. So they boot, execute, and then exit. Maybe... The best approach here would be to utilize a ready-made special container runtime called Kata

containers. Kata Containers are a little bit different, because they have this hardware virtualization technology as a second layer, so you basically get a dedicated kernel for the container that you're running. A simple demonstration of how this looks: on the host where I'm running containers, this is the kernel version, and you can see the same kernel version inside the container when you run it. But if you change the runtime to Kata Containers, you can see that this is a slightly newer Linux kernel. That's just to demonstrate that it is a different kernel from the host kernel that I have.
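The kernel comparison described here can be sketched in a few commands. This assumes Kata Containers is installed and registered with Docker under the runtime name kata-runtime (the exact name depends on the installation):

```shell
# Kernel version on the host
uname -r

# Default runtime (runc): the container reports the same version,
# because it shares the host kernel
docker run --rm alpine uname -r

# Kata Containers runtime: the container reports a different version,
# because it runs inside a lightweight VM with its own guest kernel
docker run --rm --runtime=kata-runtime alpine uname -r
```

If the last command prints a different kernel version than the first two, the container really does have a dedicated kernel.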

Also, one other good alternative is to utilize application-level kernels. gVisor, for instance, is one of these application-level kernels. It works in a way that there's a core element inside gVisor, called Sentry, that intercepts system calls and then tries to handle them inside user space if there is an implementation for them.
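A sketch of how this looks from the Docker side, assuming gVisor is installed and registered under its usual runtime name, runsc:

```shell
# Host kernel version
uname -r

# The same image under gVisor: Sentry intercepts the uname syscall
# and answers it in user space with gVisor's own version string
docker run --rm --runtime=runsc alpine uname -r
```

In the demo shown in the talk, the second command prints a 4.4.0-style version that belongs to gVisor itself, not to the host kernel.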

It will do its best to prevent touching the host kernel, and you won't see that much impact on the performance. For instance, in gVisor the TCP stack, or the network stack, is completely implemented in user space, so any networking-related system calls will never touch the kernel, while memory management or process exiting or something similar will have to touch the kernel: because you're exiting the process, you have to tell the kernel to exit the process. Also, the same thing that I've demonstrated with Kata Containers: this is my host kernel, and if I'm using the gVisor runtime, it looks a little bit dramatic, but this version does not really mean anything. It's

what gVisor was, I guess, based on. Since it's an application-level kernel, it will try to handle the implementation within gVisor; if it cannot handle it, it will just forward it to the host kernel. So the version 4.4.0 doesn't mean much. Okay, so let's conclude with what we have said and some of the future expectations that at least I have for sandboxing, or for providing code execution as a service. I mentioned interactive learning at the beginning; all of these features will not disappear. They will stay, and they will maybe amplify even more. So looking into the future, I would say that with all of this LLM stuff, this is going to get even much worse. So

lately we're seeing AI chatbots, model tools, personal AI assistants, agents, agentic whatnot, and they're all providing code execution. And this kind of code execution also requires, I would say, sandboxing. So this knowledge will be, I would say, of great use in the future as well. Just to also demonstrate what I've just said regarding the LLM era: for instance, in Open WebUI there are tools that allow you to execute Python code on the back end, there is a code interpreter API in LibreChat, and there are numerous MCP servers that allow code execution. And maybe the most notorious one of the past month:

OpenClaw also allows you to execute code. That's it.
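(A sketch of the user-namespace remapping mentioned earlier in the talk; it is a daemon-level setting, shown here assuming the standard /etc/docker/daemon.json location:)

```shell
# Enable user-namespace remapping: UID 0 inside containers maps to an
# unprivileged UID range on the host
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "userns-remap": "default"
}
EOF
sudo systemctl restart docker

# After the restart, "userns" shows up in the daemon's security options
docker info --format '{{.SecurityOptions}}'
```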

Thank you, Tomislav. Informative as usual. Any questions?

No questions? I have questions, okay. So, in your experience, what are the most common mistakes organizations make when sandboxing user code? Well, I kind of already stated that in the presentation, but I can also repeat it. I would say that the networking part is maybe the number one thing that's most revealing to the developers. I'm not sure if they're not aware or if they're just not thinking about the networking segment, but everyone kind of expects that if you're using containers and the user code is sandboxed, then it's sandboxed, it works, it should be fine, right? But there are these risks. So I would say primarily the network segment. In your opinion, that's, if they

want to implement one security guardrail on such things, networking is the first thing to take care of? Yes. Yes. Okay, any more questions? I'm sorry for my voice. What's your opinion about using Linux containers in an infrastructure like this? Performance-wise and security-wise, they have a lot of configuration options and so on. So

my perspective on the usage of Linux containers is that this is the most optimal thing you can use in terms of sandboxing. Because, okay, it has some kind of impact on performance, but it's low enough that it's in a good relation with the security benefit that you get. So, as you've seen, using containers, just by using them, is a great choice, but because there are a lot of parameters, you have to think about these details depending on the use case that you have. There is nothing else that I would suggest you should use that is not actually containers. So my first suggestion, if you want some form of sandboxing, is: use

the containers. Just be mindful of the parameters. And of course, if you can use the virtualization layer as well, that's great.
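As an illustration of "be mindful of the parameters", a hardened run for untrusted code could look roughly like this (a sketch, not a complete hardening guide; the network and image names are arbitrary):

```shell
# Internal network: no route to the outside world; disabling
# inter-container communication (ICC) also isolates sandboxes
# on the same network from each other
docker network create --internal \
  -o com.docker.network.bridge.enable_icc=false \
  sandbox-net

# Run the untrusted code with capabilities and resources cut down
docker run --rm \
  --network sandbox-net \
  --read-only \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --pids-limit 128 \
  --memory 256m \
  --cpus 0.5 \
  alpine sh -c 'echo "untrusted code would run here"'
```

Adding `--runtime=kata-runtime` or `--runtime=runsc` to the run command layers the kernel-surface protections from the talk on top of these flags.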

Hi. Can Docker in Docker be part of a solution, or is it just transferring the problem to another level? Can you just repeat which? The DinD, like Docker in Docker.

Well, so I would say... I don't have experience with actually using Docker in Docker in production; I've only built it in isolation. As far as my understanding goes, using Docker in Docker in production systems is not something that's usually suggested, because there are restrictions: you don't have the same visibility into a container network inside a container as you would have with a container network by itself on that infrastructure. Usually, my guess is, it's because it requires a lot of maintenance, and if you want to allow some kind of networking access to that inner container network, you still have to mingle with the initial layer of the container. Usually people don't think about these layers in that way, so, for simplicity's sake, they usually

do not go two layers deep. Me personally, I don't have any production experience with containers in containers; I did test them out for testing purposes, and my understanding is that it's usually used for testing. From a security perspective I would say it's better, because you have the two layers. The only thing is, if your initial container is also misconfigured, then these two layers do not mean much, so you also have to think about the configuration of both the initial and the second-layer container. I hope that answers your question. Anyone else?
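One concrete detail behind that answer: the official docker:dind image will not start its inner daemon unless the outer container is privileged, which is exactly the "misconfigured first layer" problem (a sketch):

```shell
# The inner Docker daemon needs --privileged on the OUTER container,
# which gives up most of the isolation the first layer provided
docker run --rm -d --privileged --name inner-docker docker:dind
```

So the second layer only adds security if the first layer is still locked down around it.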

Hi. So, from a simplifying perspective, are there any recommended sources for pre-hardened Docker images that are ready for production use and have most of the things already configured? Or is this false advertising, where you use them but still have a lot of security holes even though they're called secure images? So, I know of, I believe, three sources of secure images. One: there are the Docker Hardened Images, the official Docker images but hardened. There is also the Bitnami set of images, which are also considered secure images. And the third ones that I know of are distroless images. I don't have any experience with the Docker Hardened Images; I did experiment with Bitnami images.

I have seen some in production as well, and from my perspective, and from the testing that we have performed, I would say that these images, used as defaults, are in a good balance with security. My personal choice would always be distroless, because distroless removes a lot of stuff from the container: only the baseline stuff that is needed is inside, and usually only the app is running, so you actually cannot even obtain a shell. That would be my suggestion; this is what I have the most experience with. But from the few cases where I've also seen other providers, I would say they're also good in terms of security. Well, thank you, Tomislav, as always.
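A minimal sketch of the distroless approach just described, using Google's distroless Python base image (the app.py filename is made up; distroless images ship no shell, so docker exec into them fails):

```shell
# Dockerfile for a distroless image: the build stage has the tooling,
# the final stage has only the interpreter and the app
cat > Dockerfile <<'EOF'
FROM python:3-slim AS build
WORKDIR /app
COPY app.py .

FROM gcr.io/distroless/python3
COPY --from=build /app /app
WORKDIR /app
# the distroless python3 image uses the interpreter as its entrypoint,
# so CMD is just the script to run
CMD ["app.py"]
EOF

docker build -t sandboxed-app .
```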

That's our time. Tomislav will be here all day, so you can catch him, ask him more questions, bug him, annoy him, and please use him: he's here to share his experience. So use him and abuse him. Thank you very much, see you at 10.