← All talks

Rob Richardson - Service Mess to Service Mesh

BSides Knoxville47:0454 viewsPublished 2021-05Watch on YouTube ↗
About this talk
Service Mess to Service Mesh by Rob Richardson Recorded on April 28th, 2021 for the 7th annual BSides Knoxville conference. In our quest to secure all the things, do we jump in too quickly? We'll use Istio and Linkerd as example service meshes, and look at the features we would expect from a service mesh. You'll leave with a concrete understanding of the service mesh, and be ready to justify the investment. Rob Richardson is a software craftsman building web properties in ASP.NET and Node, React and Vue. He’s a Microsoft MVP, published author, frequent speaker at conferences, user groups, and community events, and a diligent teacher and student of high quality software development. You can find this and other talks on https://robrich.org/presentations and follow him on twitter at @rob_rich.
Show transcript [en]

this is so much fun i am so honored that i get to speak and you know more than five stars that's that's pretty cool um we're going to talk about a service mesh but before we do so um here's the part where i tell you i am definitely gonna post the slides on my site tonight you're going to hit um uh you'll hit my site tonight tomorrow i've been that guy chasing the speaker as well and it never worked out very well which is why you can go to robrich.org right now and click on presentations here's service mess to service mesh actually let me cruise up to the uh besides knoxville service mess to service mesh here is the

slides they are online right now achievement unlocked okay while we're here on robrich.org let's click on about me and we'll talk about some of the things that i've done recently i'm a microsoft mvp a friend of redgate a single store developer evangelist and let me tell you about az gift camp easy gift camp brings volunteer developers together with charities to build free software we start building software friday after work sunday afternoon we deliver that completed software to the charities sleep is optional caffeine provided if you're in phoenix come join us for the next a-z gift camp or if you'd like a gift camp here in knoxville or wherever you're connecting from hit me up on email and twitter and let's

get a gift camp installed in your neighborhood too some of the other things that i've done sql source control basics minus chapter eight that was really fun and one of the things i'm particularly proud of i replied to a dot-net rocks podcast episode they read my comment on the air they sent me a mug and if you'd like it no i'm just kidding service mess to service mesh let's dig in and start talking about what is a service mesh why might i try to use it but first do you remember when you learned how to drive do you remember that freedom when you expanded beyond your current neighborhood and you were able to travel across town or maybe between towns

you know how fun it is to be able to drive on an open road so let's imagine we're in a small town and we're able to drive as fast as we need to to get where we need to go and it's really really fun well in time the town grows up and we start having traffic jams hmm so how do we solve this how do we get well i've looked at this graphic and i'm like well so i think that uh this guy needs to turn maybe this bus can uh this guy might have the right idea maybe if we can yeah not solve this but how do we solve traffic well okay let's put a traffic cop at the edge

of town and he'll be able to monitor it anybody who's speeding on the way into town he's gonna slow him down now that's great but we're not getting to where we need to as efficiently as possible now instead we're getting to uniformity we're all traveling the same speed so those people who want to travel slower or faster are actually getting a less effective experience really what we want is a mechanism where the cars can communicate with each other where we can prioritize traffic those cars that want to go really fast or emergency vehicles can use expedited experiences to get where they need to go fast and those cars who would rather enjoy the scenery and meander

can definitely do so when we're working together we can ensure that we get as efficiently as possible rather than just focusing on uniformity let's use those same principles as we start looking at a service mesh we'll take a look at what is a service mesh why would i want it what other things exist in this space what problems are we trying to solve we'll look at both istio and linker d both great service meshes and finally kind of look at what are the trade-offs why would i want to choose a service mesh or not choose a service mesh let's dig in so first up a service mesh it manages network traffic between services inside of a kubernetes cluster

it's a great answer to the question how do i observe secure and control the traffic in my kubernetes cluster now as we dig in here let's look at observe secure and control now service mesh is going to watch the traffic moving between each microservice as we look at the traffic moving between the services we can understand well what is the network topology let's level up and talk about control once we've identified which containers call which other containers we can say you know maybe it shouldn't maybe we have a microservice reaching into somebody else's database let's create policies to be able to ensure that only those approved services are able to communicate together and if we start a rogue container

well it's locked out of everything and finally secure let's create mutual tls between our services now typically when we talk about mutual tls we talk about a trust chain we talk about verifying the trust chain and we talk about all of the weight that we need to add to our applications what if we could do that outside of our applications in the service mesh instead but still achieve that level of security observe control and secure that's what a service mesh will give us on the left we have a monolith on the right we have microservices now why did we start out with monolith well deployment was hard and we chose a monolith so that we could

be able to get all of our content deployed in one big bundle because well deployment was hard on the upside all of our private network call all of our private method calls are just local to our executable they're just memory addresses as we move towards microservices we can scale each service independently we can deploy much more frequently containers have made it possible for us to achieve this really elegant distributed system so here on the right we have microservices and well now that we're deploying quickly we get great scale but well our internal methods now have ip addresses do we really want to call in to each method straight away as we look at the traffic flowing

through our kubernetes cluster we'll look at two different kinds of traffic we'll look at northwest north-south traffic and east-west traffic north-south traffic flows into or out of our kubernetes cluster where east-west traffic flows between the microservices within our cluster between the containers now the cool part is that a service mesh can handle both north-south traffic and east-west traffic so what came before well before we had a service mesh when we just had a kubernetes cluster we wanted that mechanism that well the traffic cop at the edge of town let's find all of the road traffic coming into our cluster and ensure that it doesn't get through that api gateway is a fence around our cluster

it's great at being able to stop north-south traffic but it's only a fence around our cluster what about that east-west traffic now in this case notice how this micro service that owns this data source is actually reaching into the other data source now this microservice owns that data source and we shouldn't be able to call into it the api gateway can't stop us here the api gateway is merely a fence around our cluster now it's great for monetization and accounting let's count the number of requests that you make and be able to bill you for those requests or ensure that you're authenticated before you enter but once you're inside the cluster that east-west traffic an api gateway really can't help us with

that so let's level up how does a service mesh work let's create a service a and service a wants to connect to service b now without a service mesh service a would just open a network connection connect to service b and call it with the service mesh it's a little bit more inside the service mesh we will deploy a sidecar proxy along with each container inside the same pod so here's a local network boundary that includes both the service and the sidecar proxy so service a wants to communicate with service b service a will reach out to its proxy now the proxy is going to go check in with the service mesh control plane am i allowed to call service b in this

case the control plane says yes and so service a creates a connection to service b's sidecar proxy service b sidecar proxy again checks in with the service mesh am i allowed to accept connections from service a in this case the service mesh says yes and it forwards that traffic onto service b service b processes the result and the result flows back to service a through that service mesh now what's beautiful here is that the proxy is able to connect to the other proxy using mutual tls so we'll use a certificate on this side and a certificate on this side and the trust chain goes through that service mesh the service mesh can define that trust

chain and ensure that all of that traffic is encrypted pardon me so we have this elegant mechanism where service a can go through its proxy validate with the service mesh call out to service b's proxy validate with service mesh again and forward the traffic to service b this pod boundary ensures that only local traffic is able to proceed unencrypted and all of the traffic between containers is encrypted with mutual tls now that's wonderful as we look at this methodology we now have the ability to watch all of the traffic flowing between our services let's observe that traffic now as we notice that traffic pattern we can start to control that traffic should we be able

to call that other service should we be able to accept traffic from that other service if not then we can block that traffic and then finally secure we created mutual tls between these two services without impacting our code base that is immensely powerful just create your services using http or grpc or protobuf and we'll be able to secure that over this https connection and flow that traffic through observe control and secure now what's really cool is the service mesh is not just that first level of being able to proxy traffic between things but we can also level up and get some advanced features around this experience because we're proxying traffic between all the services we can do

things like network topology diagrams now what's interesting is that we're watching the traffic flow between services this is not what the developer thought would happen this is what's actually happening now that's really powerful because well we can compare that to the developer's view of the architecture and see hey did you hard code that configuration value to point to the wrong database we can see that based on the network topology diagram we can take a look at service health we're looking at all of the requests and responses so we can keep track of status codes and response times is this service responding much more slowly than it did is this service throwing 500 errors more than it usually does

finally we can log all of that content so that we can start to observe our cluster much more closely is this usual behavior does it often spike on monday at 3 pm observe secure and control gives us the ability to create these advanced scenarios let's level up again while we're observing securing and controlling the traffic we can create different usage scenarios for example let's create an a b test where we'll send fifty percent of our traffic here and fifty percent of our traffic there now this uh service mesh this sidecar proxy can go check in with the service mesh control plane and understand which service it should point this request to so it's easy for the control plane to

say well i'm going to send you this direction or that direction we have an a b test or we can create a beta channel let's create a mechanism where early access customers can use the beta versions of our software get early access to features help us understand the health of those systems and once we have a little level of confidence here we can roll it out to the main audience together we can create circuit breakers now it's really easy when a dependent service is struggling for us to well the first thing we're going to do is retry those connections which ensures that we bury the service if there's a service that's teetering on the edge and the client start retrying

all the connections it's gonna go down so we can trip the circuit breaker once the circuit breaker is tripped then we'll just fail all those requests straight away we can give the service a chance to regain health to start back up to initialize itself and get to a healthy state and then we can start slowly flowing that traffic back in now unlike a circuit breaker where you have to go crawl around to the panel and push the button with a circuit breaker inside of a service mesh we can automatically turn it back on when the service is healthy now this is really cool circuit breaker a b testing beta channel because we're standing in between all of

the service connections we can create these advanced network traffic patterns we also have great dashboards associated with a service like this so here's a grafana dashboard and a keali dashboard let's dig into those any questions before we dig into uh demos

rob i'm not seeing any questions perfect well on our way into demos let's take a look at one of these things as we look at a service mesh we want to make sure that we don't have unexpected traffic patterns because we have those sidecar proxies then this microservice is never going to reach out to the other database now we do need to work with our developers to ensure that they don't code their application that way but that's one of the benefits here of a service mesh is routing traffic correctly so now let's look at various service meshes now we're going to demo istio and link rd today but let's take a look at lots of service

meshes istio linker d console open service mesh dapper maybe and let's take a look at their characteristics now we could look at their feature sets but they're always one-upping each other so for the most part the feature sets are pretty consistent across them so why would i choose one over the other let's instead look at methodologies associated with each service mesh the comparison between the methodology of istio versus linker d so first up linker d now linker d is great at being able to build out their own ecosystem they're a great contributor to the rust network stack because they kind of build all the things themselves they do include a few external packages but for the most part they focus on a

really elegant startup user experience now that's great we get a great startup user experience but if you want to climb outside the standard path you'll need to use some plugins now they kind of go back and forth about are they going to use envoy as their proxy or their own built-in proxy the difference between linker one and linker do linker two for the most part is that are they using their own proxy or an external uh proxy but it still has that same methodology where we end up with a sidecar proxy that checks in with the service mesh control plane to do what it needs to do by comparison where linker d focused on having a really smooth startup

experience and writing everything themselves istio focuses on combining the best open source packages together so with istio we have profiles where we can say here's the profile i want to focus on minimum install or i want to focus on ensuring mutual tls or in the one that we'll use today the demo profile where we have everything turned on with everything turned on we have access to lots of dashboards from third-party utilities jager and grafana and all kinds of really interesting things with istio we can turn those profiles on and off to be able to enable features or we can get in and turn specific features on and off istio is more of the kitchen sink

approach where linker d is more of the lightweight approach and ultimately they both have sidecar proxies they both have mechanisms for checking in with the control plane you can observe secure and control using either one so let's demo both of them because this will be really fun i have a cluster right here a link or d cluster let's use that and let's fire up the linker d install experience linker d viz install cube ctl apply dash f dash now i've taken a few liberties here as we go to linker d's getting started page we'll start off by grabbing the let's double check that where kubernetes cluster is running then we'll go grab the linker d

cli once we've got the link or dcli we can verify its version and then we run linker d check dash dash pre that will validate that link or d will install correctly on our cluster next up we'll go grab linker d install pipe it to cube ctl apply dash f and so basically that will spit out all of the custom resource definitions the crds associated with linker d i've done that previously next up we'll do a linker d check and make sure all of those things are in place and finally we can install some visualization dependencies linker d vis install and so that's what i've done just now linker d vis install and we've got all

of those things created next we can do a linker d check and this will go validate all of the pieces of our cluster so it looks like our cluster is ready to go that was great the install experience with linker d is really smooth and that linker d check is really elegant at being able to find the details of our cluster now that we've got linkedin started linker d uh viz dashboard let's take a look at uh helps if i spell link or d correctly link or d vis dashboard up pops the linker d dashboard now we can take a look at our namespace let's look at the linkerd namespace and with each of these deployments we

can take a look at the details associated with that deployment now in this case the deployment hasn't been running very very long so there isn't much there but if we dig into for example this container we can fire up the grafana dashboard for this particular deployment here's some metrics around our cluster to be able to see how it's running yep that's the point where i turned it back on and now we get a grafana dashboard showing us the details of this cluster now what we notice is that grafana is pretty much the only external package that we have visual visible here this dashboard is for the most part all linker d created now we don't

necessarily need to click through everything to get at it let's come back to our console and let's say linker d vis stat and i'll do it for the linker d namespace and let's take a look at the deployments no traffic found link because i spelled link or d incorrectly again there we go so we get those same statistics from the command line we could use this to scrape those details or we could hook up a prometheus sync to be able to grab the metrics out of our control plane and push them into another dashboard linker d focused on getting that install experience going really quickly let's turn off linker d and let's instead focus now on istio

so i actually have two different clusters running an istio cluster and a linker d cluster so flipping over to the istio cluster let's take a look at istio now one of the things that's interesting about istio is the in the getting started experience is quite sim similar i grab the istio command line i can um check out the istio version when i start to install istio i'm going to first pick my profile in this case it's the demo profile which is the everything on profile and then i can start to add annotations to my particular namespace so let's do that let's annotate the default namespace with the istio injection equals enabled tag now that means that any of oh

i've already got it set any of the pods scheduled inside this namespace will get that sidecar proxy do i want to exempt the namespace and not control it i can just leave this tag off of that namespace that's perfect once i've got this running through they do have a great sample application that i've already got installed let's take a look at that sample application and let's go grab the port that it's running on cube ctl get service istio ingress gateway in the istio system namespace and it looks like we're going to look on port 31 100. so if i load this application here on 31 100 i will get at a product page now the interesting thing

about this product page is there's a bunch of microservices happening there's a microservice for book details there's a microservice for book reviews and there's a microservice for the number of stars that we should show here's a network diagram of that application here's our product page and it'll call out into the details service to get that content here on the left and it'll call out to the reviews service to get the content here on the right now depending on which version of the review service that we use we might get stars colored in black stars colored in red or no stars for those that show stars we'll call our rating service to get the number of stars that

we should show so right now it's set to equally go between those three ones here's uh stars in black here's no stars if we hit refresh enough we'll probably get stars that are red no there we go stars that are red so we have those three versions running simultaneously now in a normal scenario we would probably only run one version at a time but in this case having all three versions will allow us to highlight in istio the way that we can transition between versions really seamlessly here i have a list of versions so here's that virtual service that i have routed and right now it goes equally between these three destinations so let's switch over instead to virtual

service reviews version one now i at this point let's assume that we've installed version one of our service and we want to cut over all of that traffic to just this service okay now we've gone to version one we push refresh on our page we'll see that we now no longer have stars at all okay so i've readied my deployment i've built my container and i'm ready to cut over to version two but i'm not quite sure about it let's see do we have enough content in place do we have the right validation parameters in place will our content scale so let's come in here and say um let's go between our service 80 to the old service and 20 to the new

service or we could even take a much more conservative approach and say 90 percent to the old service and 10 percent to the new service let's just take a small portion of traffic and move it to the new one copy that cube ctl apply minus f that one now there's no downtime because both of our containers are already running we're just telling the control plane hey when the sidecar proxy checks in and says where is the rating service point them at this one instead okay so back on our page now most of the time we will get no stars but every now and again we'll get stars colored in black there we go now we may

choose to use sticky sessions or another mechanism to be able to transition users over to the new version but ultimately once we've decided that this version is good let's flip all the way over to v2 now with v2 service we can see that we go all the way to version two no downtime we're just pointing the service discovery piece and now we always have stars colored in black okay so now let's get ready to do version three now here i have version three but i want to do a canary release i want to have a beta channel so in this case for the user that is authenticated as json then i'll give them version 3.

for everyone else i'll give them version 2. now we might want to do this based on an authentication header or another mechanism cube ctl apply dash f so now we've got the majority of our users using version two i'm going to consistently get the stars colored in black but now let's sign in to our site i'm going to sign in as jason and log in now that i'm authenticated i get stars colored in red i'm able to check out that beta channel and understand the content in this system let's log out again and now we're back to stars colored in black we're back to version two okay i've fully tested my application i'm ready to go to version three and so

let's flip over completely to version three now here with version three um we see that we've got a hundred percent uh scroll okay let's go this way so now with version three we see that we have a hundred percent of our traffic moving to that new version that's great now let's refresh it and we will always get the red stars that was really cool we got to see an upgrade going from version one to version two to version three with no downtime at all using just the intelligent routing built into istio now let's come back here and let's do the 30 30 30 version rename now typically we would have only one of the services or maybe two of the

services running at a time but it's fun to be able to play with it so here's that third third third and so we can see that we get uh sometimes red stars and sometimes black stars and sometimes no stars that was cool let's dig into some of the dashboards that come with istio now if i'm looking at these dashboards i can say istio ctl dashboard dash dashboard typing is hard prometheus now this cluster was just started a moment ago so we probably don't have a whole lot of details here but let's take a look at one metric here's all the details associated with that metric here in prometheus now this is interesting this is the raw

prometheus data but we probably want to level up a little bit and look instead at the grafana dashboard over the top of it so let's switch over to grafana and now we can see the grafana dashboard over that prometheus data now we do have some built-in dashboards here so here's some istio dashboards let's take a look at the workloads now in this case we don't have a whole lot of work running in our cluster we can see the times when i turned it off and turned it back on but we get great analytics into the metrics inside of our system let's take a look at the istio control plane yep our control plane looks like

it's doing well um it's using a little bit more disk the cpu spiked a little bit so i may need some additional horsepower in my mini cube cluster but we can get great visualization into our istio cluster that's really cool let's look at jaeger istio ctl dashboard jager now jager is a great ah istio ctl dash board spelling it correctly would help

there we go this is a great way of being able to visualize our system so we can see that our istio ingress gateway calls to our product page that goes to our details page we also have our reviews page that goes to the ratings page this is that network diagram where we can map the content as it actually happens now we see 97 requests coming in here 79 went this way 80 went this way and only 52 went to the rating service that makes sense when we're using version one it didn't need to call the rating service when we've used version two in version three it did well this is interesting but let's zoom in a little bit

and take a look at the kiali dashboard as we look at the keali dashboard once we're logged in i'm going to flip over to the graph and this one is the one that is a little bit temperamental so let's refresh the page a bunch of times because it's showing live traffic i love this dashboard where it shows the the network diagram here's our product page and our product page reaches out to the detail service and that hits the version 1 of this container the product page also reaches out to the reviews service now in this case we were using all three services but we can see that version two and version three actually reach out to

the rating service and the rating service forwards off to version one well if we switch over from version one to version two we should see this traffic increase and eventually this node will drop off completely as we flip over exclusively to version three we'll be able to see this traffic did that deployment go okay is it routing the way that we expect that's where this kle dashboard is really cool we can see these virtual services that replace kubernetes as native services and see the routing rules baked into the service discovery piece inside this dashboard and there hasn't been traffic in my cluster for a while so now it's showing me all these idle nodes that have not connected to

anything so that was really cool we got to tour through both linker d and istio linker d focuses on a really elegant startup experience and so we can see how we can get started really quickly with linker d now often times when we need to go beyond what linkard provides we need to pull in extra plugins by comparison istio is the kitchen sink where we can turn knobs on and off and we got to see a lot of different experiences inside of istio a lot of different dashboards and also a lot of virtual services being able to route in different ways are there any questions here on the demos things that we saw in istio or

in linker d that we'd like to dig into more deeply i'm not seeing any questions yet but if any pop up i'll definitely shoot them your way that sounds perfect so let's turn off um istio ctr istio demo let's turn off the istio demo and focus back in on when we would use a service mesh what did we see in a service mesh well we started with the crawl phase because we can observe secure and control all of the content within our all of the traffic both east-west and north-south traffic within our cluster we could start getting really interesting things like monitoring logging service health when we move from crawl to walk now we get intelligent routing

we were able to do a b channels and interesting thing a b channels and um a b tests and beta channels as we get from crawl to walk to run now we get that network topology diagram where we can see how our traffic is actually flowing in our cluster that is really cool i love that it shows us exactly what's happening in the cluster not what we think is happening in the cluster did we spin up a road container is that container reaching out into all the other containers trying to um infect them that's a good thing to see in our network topology diagram as well as in our grafana dashboards and prometheus logs zac butcher highlighted nicely some of

the things inherent in uh istio control plane or in in a service mesh a service mesh has a control plane and zac highlights that here if it doesn't have a control plane it's not a service mesh now don't we already have a control plane in our cluster don't we already have well the cooper daddy's control plane yes on the left we have all of the details associated with our kubernetes system we have the control plane where we have fcd and the api server and various content associated with those control plane machines then we have the worker nodes and inherent in the worker nodes we have the cubelet the cube proxy we might have c advisor

all of this content is in addition to the workload that we're doing in our pods in containers we're going to add to that the service mesh now the service mesh has a control plane things that uh create these certificates and validate their trust chains things that keep track of the mechanism of routing and understand when a container should or shouldn't be routed to we have the mechanisms of being able to connect from those sidecar proxies into this control plane and then finally we have the sidecar proxies alongside every other container we just doubled the number of containers running in our cluster if we're just trying to include a service mesh purely to check the checkbox

then we may have failed ourselves there is a non-trivial amount of compute cost associated with running twice as many containers now these sidecar proxies are probably a lot smaller than the big tomcat java applications that we're running inside these containers so maybe it isn't twice the amount of compute just because we have twice the number of containers maybe it's 1.4 times the compute workload but it is a non-trivial amount of compute to add and so we do need to provision that additional compute inside of our kubernetes cluster to be able to handle the additional needs of the control plane of our service mesh if we're not ready to spend extra on our service mesh then a service mesh

may not be a great fit let's dive into that a little bit deeper a service mesh is great at doing that observe secure and control process where we can watch all the traffic flowing between all of our services we can create policies that regulate that traffic and we can ultimately secure the traffic between each container without needing to change our code to make that happen observe secure and control when should i use a service mesh a service mesh is great when we need to separate and regulate and secure traffic going between containers if we have high trust workloads like pii or pci where we need to take that pii information or the credit card

information we need to make sure that that is very specifically separate in our cluster we'll create a service mesh we'll create policies so that only the known containers that need to can access those pci or pii containers by comparison we might have untrusted workloads maybe we're running workloads for others those probably need to get segregated as well now we might generally reach for namespaces in this case but kubernetes namespaces are organizational they are not security boundaries yes i can't schedule something in somebody else's namespace but once i do schedule something in my own namespace that container can reach out into any other container in the cluster by default a service mesh allows us to create those

lanes perhaps we're creating a multi-tenant environment where each tenant is granted one namespace we can create those very specific swim lanes with our service mesh to ensure that that tenant can only operate inside their namespace by default we think of namespaces as security boundaries because they are launch security boundaries but they're not operational security boundaries a service mesh can be that operational security boundary so if i need security in depth if i need to be able to route all traffic between all containers over https this is a really elegant way to do that because i can do it without impacting my applications themselves the applications route everything through those sidecar proxies and the traffic between the application

and the sidecar is within that local network inside that pod so there's no unencrypted traffic flowing across my cluster that can be a really nice thing to have finally if i need advanced networking like an a b routing scenario or a beta channel if i need that advanced scenario routing then a service mesh can be a great way to achieve that now these are some of the places where a service mesh makes sense if i need to be able to observe secure or control traffic flowing in my cluster either east-west or north-south should i pull in a service mesh well do you have these needs if you don't have these needs a service mesh's

additional compute costs may be enough of a downside to say no i really don't need a service mesh but when you do need it when you need to be able to do mutual tls between your containers when you need to be able to create those connection policies a service mesh is a perfect solution for being able to take that additional level of control this was really fun getting to show you service mesh in all of the details if you're watching this video later hit me up on twitter at rob underscore rich and these slides are available right now on robrich.org for those who are here live at the conference what are your questions what are your

thoughts what did i miss well rob while we're waiting for questions to come in i had one myself before you started talking about performance impact that was the number one question i had it had some great benefits everything that you were talking about seemed wonderful but it looked like it would come at a pretty steep performance impact and when you were talking about roughly a 1.4x performance hit whenever you're enabling this is that a rough estimate do you see some service meshes are more or less of a performance impact than others i do want to clarify a little bit this is additional compute cost from a performance standpoint it's probably a lot less impact than that we are computing

certificates and validating certificates but it's not like all of our requests will be you know 50 percent slower our requests themselves will probably be roughly the same speed i bet you won't notice the performance impact at all you'll just note that you need well more compute power to pull this off thanks for clarifying there we've got a question coming in from carl it says how wide is adoption for service mesh how wide is adoption the example that i found is this one and i think it's really cool f-16 fighter jets have a kubernetes cluster on board keeping track of all of the different controls and metrics and dials and it's a kubernetes cluster running

that plane so they wanted a service mesh to be able to control the traffic between all of those containers running inside the plane they were debating you know do we accept the licensing with istio or should we go for linkerd because it's a more favorable license i'm not sure which one they chose but it was really interesting to see a service mesh in an airplane how widely adopted it is i would expect that a lot of organizations that want to hit that security check box pull in a service mesh really early i'm not sure if that's the best choice but i suspect that you'll find pretty wide adoption once you graduate to understanding what

a service mesh can do you may want to wait until you have those needs before you pull it in speaking of understanding what it does and needs we have a question here talking about moving pieces microservices or service architecture it seems like a service mesh would be the best security practice if security is your only goal would a service mesh still be right for you exactly because it gives you that mutual tls between the containers with a trust chain a service mesh can be a great security product now is it the only product that can do mutual tls you might be able to find a product that is not a service mesh that just does the mutual tls part

or maybe you can bake the certificate validation into your application as well and maybe get a little bit lighter weight experience but i love how the service mesh kind of abstracts that away from the applications so that each app developer doesn't need to worry about it if security is your only concern then i think a service mesh can be a really elegant solution provided you can handle that additional compute cost we've got someone typing he said that's an excellent explanation perfect i'm glad that worked all right was there anything else rob nope this was a lot of fun hit me up on twitter at rob underscore rich and let's continue the conversation grab the slides right now at robrich.org
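for reference, the mutual tls and namespace swim-lane policies discussed in the talk and this answer might look roughly like this in istio; this is a sketch and the tenant-a namespace name is illustrative:

```yaml
# sketch: enforce mutual tls and a per-namespace swim lane with istio
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: tenant-a
spec:
  mtls:
    mode: STRICT            # reject any plaintext traffic to pods in tenant-a
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: same-namespace-only
  namespace: tenant-a
spec:
  rules:
    - from:
        - source:
            namespaces: ["tenant-a"]   # only callers inside tenant-a are allowed
```

once an allow policy like this matches a workload, requests that don't match any rule are denied, which is what turns the organizational namespace into an operational boundary.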

thanks for joining us everybody have a great day rob thanks for talking yes