
I'm going to hand it over to our next presentation: confidential computing, with our speaker Jordan Mikum. [Applause] All right. Hey everybody. We're going to talk about confidential computing today. I have a link to the slides via the QR code, or you can go to my website, where there will be a link to the presentation if you want to take a moment to grab it. I'll also put links to some useful articles throughout the slides, so you might want to grab this so you can reference that material later if you find it interesting or helpful. All right.
So, let me adjust this thing here a little bit. Okay. Normally, when you send data to a server, you have no idea what actually happens to it. But what if you could know? What if there were some way to cryptographically prove how your data was handled and processed inside a secure enclave, isolated from everything else? You can, and that's what confidential computing is all about. That's what we'll spend the next 45 minutes really understanding. A quick who-am-I: my name is Jordan Mikum. I work as a security engineer at Block on Bitcoin security. Generally my focus is on low-level systems, embedded, and applied cryptography, things like that.

Today we're going to talk about confidential computing. First, we'll give an overview of exactly what it is and where it fits in. Then we'll talk about the threat model: where it helps secure your systems in addition to the things you're probably already using. We'll spend the majority of the time on a technical deep dive into what confidential computing technologies exist, how this stuff actually gets implemented, and what's available in the public cloud so you can start using it today. Lastly, we'll end with some applications and industry examples so you can better understand how this stuff might fit into products or projects you may be working on.

Okay, let's start with what confidential computing is, exactly. If you look it up online, confidential computing is a set of technologies to process customer data in a secure and privacy-preserving way. But for me, that definition is a little jargony and hard to grasp; while it's accurate, I don't find it super digestible. So when I think of confidential computing, I think of it like this: it's when you process data in a cryptographically attestable trusted execution environment, or TEE for short. And this is really what confidential computing is all about. Here on the left we have a very simplified view of a normal back-end architecture: you have some server with some workload, helpfully described as "do stuff," which maybe writes to some database, maybe reads from it over the network, maybe processes customer data, whatever. With confidential computing, we add the two orange boxes. You move "do stuff" into the trusted execution environment and run that code there, and the TEE can output an attestation statement, or document, that makes claims about what ran in the TEE. And those two orange boxes
are really what confidential computing is. Adding those orange boxes to a normal back-end system is what makes it confidential computing.

So when do you need confidential computing? Why is it helpful? It's really useful in two cases. One is if you're running code in the cloud and you want to reduce trust in the cloud service provider: you can use confidential computing's hardware isolation technologies to help protect against and reduce trust in the CSP, or cloud service provider. We'll see exactly how that works in a little bit. The second is if you want to use that attestation statement to prove to customers how you're handling their data. For example, in common use cases nowadays like LLMs, cryptocurrency, traditional payments, or healthcare, you might be processing really sensitive data and want to give customers some assurance that you're actually doing what you say you are. Confidential computing can help with that by giving those customers a verifiable claim about what you're actually doing with their data. We'll see how that works as well when we get to the details.

Okay. Now let's look at the confidential computing threat model to understand how it fits in. This is the traditional cloud model with a traditional CSP. In this model, basically everything about the cloud service provider is trusted. The tenant (you, or the company you work for, running code on the CSP's infrastructure) trusts everything about them: the operators and personnel, the hypervisor, the host operating system and everything running there, including the chips they've chosen to put in the servers on which your code actually runs. And by extension, the user also trusts the tenant and the CSP fully. Essentially everything is in the trusted box and nothing is in the untrusted box.

With confidential computing, you shift trust away from the CSP and put it instead into the hardware manufacturer. The tenant doesn't have to trust the CSP's software as a result. Instead, you move that trust into, for example, AMD or Intel or whoever's chips are actually running your code in the cloud. And by extension, the user no longer has to blindly trust both the tenant and the CSP, because the attestation statements originate from the hardware; they're really able to reduce trust there as well. Essentially, we shift two of the boxes from before into the untrusted bucket, and that's good: the fewer trusted things you have in a system, the less you have to worry about overall. When we get to the technical details, we'll see how these claims hold up.

Okay, and for an informal threat model of where confidential computing actually fits in: basically, it's useful when you're processing customer data in use. When your code is actually doing stuff with customer data in RAM and in registers, that's when confidential computing is relevant. It's also relevant when you generally want to reduce trust in the CSP, so you don't have to trust the host operating system or hypervisor and things like that; that's where confidential computing helps mitigate. But it doesn't help against many other things. It makes no claims about data at rest: you still have to do data-at-rest encryption yourself. It makes no claims about how data gets into your system over the network. It makes no claims about availability guarantees, so the CSP could of course just pull the plug and you wouldn't be able to do anything about it. And lastly, you can of course still write application vulnerabilities, and side-channel attacks are also still a concern. So it helps, but you still have to use all the other things you're already using: TLS, database encryption, and so on.

Okay. Now that we understand the definition of confidential computing, where it fits in, and what the informal threat model is, let's
look at the building blocks that make up any confidential computing system, to really understand how this stuff works. Again, confidential computing is when you have a cryptographically attestable TEE. Let's break down those two parts, remote attestation and trusted execution environments, starting with TEEs.

A TEE is just a hardware-isolated place to run code. That's really all it is. TEEs are really common in mobile phones: you might have heard of Apple's Secure Enclave, ARM TrustZone, or maybe Android StrongBox. These are all examples of TEEs available on mobile platforms today. TEEs are less common on the server, but confidential computing is really the use case where TEEs start to become more and more relevant there.

Generally, there are three architectural ways to approach building a TEE and actually getting that hardware isolation. The one on the left is when you have totally separate systems: you have the TEE and you have the SoC, or system on a chip, which is the main CPU, the main processor essentially. In this model, the TEE is a separate chip from the main SoC; they're different chips on the printed circuit board, and they talk to each other over some comms channel. This is the most clear and obvious form of separation, and if the SoC is compromised, there's no immediate compromise of the TEE. But it's also expensive (it's literally another chip) and it's not as fast, so it's not so common. The middle one is much more common, and it's what we see now in many cases: the TEE is a co-processor within the same silicon die, but a separate CPU from the main SoC. This is the model we're going to see for all of the instantiations of confidential computing I'll talk about in a bit. And the last one isn't really common in the server case, but it is common in mobile and embedded. This is the ARM TrustZone model: you have just one processor, but you introduce another privilege separation level, similar to kernel and user mode separation. There's the secure world and the non-secure world, and combined with kernel/user separation you get four different regions you can separate code into.

I just want to highlight, though, that TEEs are not magic. Just because you put your code in a TEE doesn't mean security vulnerabilities go away; your code is just as exploitable inside the TEE as it was outside. And so any issues, any
breaches in CI/CD, any bad third-party dependencies, or any bugs that could lead to remote code execution can all affect code within your TEE. It's just a sandboxing mechanism, really nothing more than that.

Okay, now let's take a look at remote attestation. Remote attestation is when you verify the authenticity and integrity of the code running within a TEE and produce an attestation report, statement, or document (these terms are used interchangeably) which a third party can verify. This is the general remote attestation model you'll see in essentially any system that implements it, and there are four main components. At the top we have the TEE in which the workload is running; that's the isolation guarantee you get. Then you have the attester, the combination of hardware and software which cryptographically measures, that is, hashes, the workload running within the TEE at a specific point in time. The attester then outputs a signed attestation statement which somebody can verify. Note that the attester has a unique private key provisioned within it that chains up through normal PKI-style certificates, so anyone can check the chain of trust. Then the verifier looks at the attestation report and verifies whether it meets its policy.

The verifier's policy can differ depending on the exact goals you're trying to meet. But as an example, a pretty common policy you might implement in your verifier: look at the attestation statement, make sure it's signed appropriately with a private key that chains back up to, say, AMD's root of trust for secure processors, and then make sure the hash of the workload's code in that attestation statement matches some pinned hash that you trust. The verifier then outputs yes or no based on whether the report meets the policy it implements. And lastly, you have the relying party, which is the thing that ultimately consumes the output of the verifier and does stuff based on the result of that verification. Note that the verifier and relying party can be the same piece of code; they don't have to be different, but in a lot of systems they are, for various reasons, so I separate them here for generality.
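To make that verifier policy concrete, here's a minimal sketch in Python. Everything here is hypothetical: the attestation statement is a plain dict, and the real signature check (walking an X.509 chain up to a vendor root like AMD's) is stood in for by an HMAC, purely to show the shape of the two checks.

```python
import hashlib
import hmac
import json

# Hypothetical stand-in for the vendor's root of trust. A real verifier
# would validate an X.509 certificate chain up to e.g. AMD's root instead.
VENDOR_KEY = b"stand-in-for-vendor-root-of-trust"

# The hash of the open-source, reproducibly built workload we expect.
PINNED_CODE_HASH = hashlib.sha384(b"example workload image").hexdigest()

def sign(statement: dict) -> str:
    """Simulate the attester signing a statement (HMAC stands in for PKI)."""
    payload = json.dumps(statement, sort_keys=True).encode()
    return hmac.new(VENDOR_KEY, payload, hashlib.sha384).hexdigest()

def verify(statement: dict, signature: str) -> bool:
    """Verifier policy: valid signature AND measurement matches the pin."""
    payload = json.dumps(statement, sort_keys=True).encode()
    expected = hmac.new(VENDOR_KEY, payload, hashlib.sha384).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False  # signature does not chain to the trusted root
    return statement.get("code_hash") == PINNED_CODE_HASH

# The attester measures the workload and signs the result.
statement = {"code_hash": hashlib.sha384(b"example workload image").hexdigest()}
assert verify(statement, sign(statement))

# A tampered measurement fails the policy check even if validly signed.
bad = {"code_hash": hashlib.sha384(b"malicious image").hexdigest()}
assert not verify(bad, sign(bad))
```

The relying party would then branch on that boolean, for example releasing a secret only on success.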
Now, there's a term that comes up a lot with confidential computing, which is this thing called a key broker, and I want to note it here. Essentially, confidential computing requires remote attestation to prove that certain code ran; that's what we just saw. But the environment that verifies the attestation statement can't be the same one that produced it, because then you'd have a circular trust problem. So you need some separate service that verifies the attestation, and in many cases, what it does as a result is provision keys into that place. This system is often called a key broker. If you look this stuff up afterwards, you'll probably see that term, and that's what it means.

The other thing I want to note is the idea of reproducibility, which is really important for the guarantees I've been talking about, where a third party can verify your code. For a third party to meaningfully assess the attestation statement you output, the code which was measured has to be open source, so somebody can look at it and be sure it's doing what they think it should be doing. And it has to be reproducible, so they can build it and get the same hash that went into the attestation statement. If they do, then once they receive a signed attestation statement, they can be sure that the code they expect is running, and they know it's running in an environment that has been attested to by the system. That's how you get the overall remote attestation guarantee you're looking for.

Okay, so if we put it all together: you have a TEE, which protects against untrusted system components, including the CSP's code, and again, we'll see more details on exactly how that works in a second. The verifier can check an attestation statement, which gives them assurance that some expected piece of code is running within a TEE. So you have data isolation, and you don't have to put blind trust in the server anymore, because you can verify these attestation statements. As a result, you have a trusted place where you should feel comfortable sending confidential data, and that's where the name
confidential computing comes from: you have this confidential space that you trust, that you can send data to.

Okay, now that we've got the building blocks, let's look at the specific technologies that actually implement this stuff today. There are three main ones I want to talk about; I'll spend the majority of the time on the first two, with just a short note at the end on the third.

Let's look first at AMD's offering, which is called AMD SEV-SNP. AMD SEV-SNP is the current leading confidential computing (or CC, as I've abbreviated it here) technology, and it's available on GCP, AWS, and Azure. Here on the right we have an image of an AMD chip with the PSP, the platform security processor, which is the root of trust for this confidential computing technology. It's also called the security processor or secure processor, which is the newer name, so I'll refer to it as that. The PSP, or SP, implements that middle diagram from before, where you have the trusted execution environment within the overall system on a chip. The SP is responsible for handling a lot of the critical security features of AMD's chips overall, like boot initialization, memory encryption, remote attestation, and things like that.

Now, you may wonder why they chose this model, where the SP is an ARM co-processor within the same AMD silicon die. The reason is primarily efficiency and cost. And the reason it's a separate chip, instead of just being part of the overall chip, is that there are various physical and firmware mitigations that exist within the SP that can't really apply to the overall chip. For example, it runs signed firmware, it has debug access locked out, and it's not meant for general-purpose development. It's this privileged black box that has insight and security decision-making authority over the rest of the system. So it needs to be isolated, but at the same time, as we'll see in a moment, it needs to be fast, so the middle architecture really makes sense. I also want to note that the SP itself is not the TEE; what it does is enable the creation of TEEs, and we'll see exactly what that means in a moment.

To understand how SEV-SNP works, you have to look at essentially years of technology that AMD built leading up to it, so we'll do that. The naming is a little confusing (sorry, it's an alphabet soup), so I'll try to make it clear, but bear with me; it's just what this stuff is called. The first piece that builds up to the SEV-SNP guarantees is secure memory encryption.
SME was invented prior to the overall confidential computing technology I've been talking about so far, so SME in isolation is not confidential computing, but it's a necessary building block that gives us confidential computing overall. What it is, is a very general and efficient way to encrypt CPU RAM: data gets encrypted when it's written to DRAM and decrypted when it's read back. This is all transparent to the CPU. Over here on the right, you can see that the CPU just sees plaintext data, but there's stuff going on behind the scenes. The secure processor (the red box from before) communicates a key to an AES engine within the memory controller of the AMD chip, and the memory controller optionally encrypts data when it writes to DRAM and decrypts it when it reads back. It's optional because the encryption happens at the page-table level: with SME enabled, there's an additional bit called the C-bit, or confidential bit, which determines whether a page of memory should be encrypted. If it's set to one, the page is encrypted; if it's set to zero, it isn't. So you can enable encryption per page.
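Here's a toy model of that write path, with the C-bit deciding per page whether the encryption engine is invoked. A single XOR byte stands in for the real AES engine and key, purely for illustration.

```python
# Toy model of SME's C-bit: the memory controller encrypts a page's data
# on the way to DRAM only when that page's C-bit is set. XOR with one key
# byte stands in for the real AES engine; everything here is illustrative.
KEY = 0x5A  # stand-in for the key the secure processor gives the AES engine

page_table = {0: {"c_bit": 1}, 1: {"c_bit": 0}}  # page number -> flags
dram = {}

def write(page: int, data: bytes) -> None:
    if page_table[page]["c_bit"]:
        data = bytes(b ^ KEY for b in data)  # "encrypt" on the way out
    dram[page] = data

def read(page: int) -> bytes:
    data = dram[page]
    if page_table[page]["c_bit"]:
        data = bytes(b ^ KEY for b in data)  # "decrypt" on the way back
    return data

write(0, b"secret")   # C-bit set: what lands in DRAM is not the plaintext
write(1, b"public")   # C-bit clear: stored as-is
assert dram[0] != b"secret" and read(0) == b"secret"
assert dram[1] == b"public"
```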
Okay. SME uses a cipher mode that's pretty interesting: XEX (XTS is also supported, but it's basically the same thing, so we'll just look at XEX). This is the diagram that implements XEX. But before we dig into it, you might wonder: why use XEX instead of something like AES-GCM, which is used in lots of other settings and is perfectly fine there? The reason really comes down to efficiency. GCM requires a separate IV, or nonce, per operation, which is not efficient; we don't want to carry that state around for DRAM, which is very high-throughput hardware with a lot of performance demands. So GCM isn't really suitable. But at the same time, GCM does provide an important security property: if you use no mode at all, i.e., ECB, then identical plaintext blocks encrypt to identical ciphertext blocks. We want to mitigate that, because it would reveal structural patterns in the data, but we don't want an IV. That's why XEX is used: it gives us that guarantee without per-operation state.

The way XEX works is we introduce the concept of a tweak, which is based on the physical address in memory where we're writing the plaintext. We take the physical address and essentially XOR it into the plaintext that goes into the underlying block cipher (AES, in this case) before we write it. That means if you write the same plaintext to different locations in memory, it results in different ciphertexts. This is also very efficient: there are no linkages between different blocks, so it can all be implemented in parallel, which is important for DRAM, where speed is essentially the most critical thing.

You may wonder why there's a second XOR at the bottom. The first XOR at the top is what gives you different ciphertexts for the same plaintext at different physical addresses, but the necessity of the second XOR is not quite as clear. There are a few reasons, but the one that matters most for our purposes is, again, efficiency: the second XOR makes encryption and decryption symmetric operations, so you can use the same hardware to implement both.

Okay, next is secure memory encryption with encrypted state, a very natural extension of SME: instead of just encrypting DRAM, you additionally encrypt CPU registers when the system sleeps or hibernates. We don't really have time to go into the details of how that's implemented; just know that it's used going forward and is a strict enhancement of SME.
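The XEX construction described above (tweak in, block cipher, tweak out) can be sketched like this. A keyed byte substitution stands in for AES, and the tweak derivation from the physical address is simplified; this is not AMD's actual construction, just the shape of it.

```python
import hashlib
import random

# Sketch of XEX-style tweaked encryption as used for memory encryption.
# A fixed keyed byte-substitution stands in for AES; the tweak derivation
# is simplified. Purely illustrative, not the real hardware construction.
_sbox = list(range(256))
random.Random(1234).shuffle(_sbox)           # fixed "secret" permutation
_inv_sbox = [0] * 256
for i, v in enumerate(_sbox):
    _inv_sbox[v] = i

def _cipher(block: bytes) -> bytes:
    return bytes(_sbox[b] for b in block)      # stand-in for AES encrypt

def _cipher_inv(block: bytes) -> bytes:
    return bytes(_inv_sbox[b] for b in block)  # stand-in for AES decrypt

def _tweak(phys_addr: int, size: int) -> bytes:
    # Simplified per-address tweak (real hardware derives it differently).
    return hashlib.sha256(phys_addr.to_bytes(8, "little")).digest()[:size]

def encrypt(plaintext: bytes, phys_addr: int) -> bytes:
    t = _tweak(phys_addr, len(plaintext))
    x = bytes(p ^ tb for p, tb in zip(plaintext, t))   # first XOR (tweak in)
    return bytes(c ^ tb for c, tb in zip(_cipher(x), t))  # second XOR (tweak out)

def decrypt(ciphertext: bytes, phys_addr: int) -> bytes:
    t = _tweak(phys_addr, len(ciphertext))
    x = bytes(c ^ tb for c, tb in zip(ciphertext, t))
    return bytes(p ^ tb for p, tb in zip(_cipher_inv(x), t))

block = b"16-byte plaintxt"  # same plaintext, two physical addresses
c1, c2 = encrypt(block, 0x1000), encrypt(block, 0x2000)
assert c1 != c2                       # no ECB-style repeated ciphertexts
assert decrypt(c1, 0x1000) == block   # and it round-trips
```

Note how decryption is the mirror of encryption with the inverse cipher in the middle, which is the symmetry the second XOR buys.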
And that's what SEV uses, which is what we'll look at next. SEV is secure encrypted virtualization. It combines SME and its encrypted-state extension with AMD's virtualization technology, AMD-V, which gives us the ability to encrypt virtual machines. Combined with the fact that SEV implements remote attestation, that gives us the confidential computing model we've been building up to.

The idea is that SEV essentially gives you a unique key per VM. With SME, we basically just had one key; the extension is that now every VM is tagged. All the data and code for each VM has an ID associated with it, and the secure processor communicates the corresponding key to the AES engine in the memory controller. Every VM then has its pages encrypted with a key that belongs to just it: VM A has key A, VM B has key B, and so on. So VMs can't see each other's memory. And the hypervisor can't see the VMs' memory either, because recall that the key is never communicated to the CPU; no code running on the CPU can look at that key or try to use it. It exists only between the memory controller and the secure processor. So with SEV, we have VMs that are confidential to each other and confidential to the hypervisor.

Do note, though, that you sometimes need to communicate cross-VM or with the hypervisor. You can still have secure memory encryption in that scenario: there's another key, a global key that's not pictured here, that you can use for that purpose, and it's up to the VM to mark its pages as either confidential just to itself or encrypted with that global key. The hypervisor could try to read the contents of a VM, but it wouldn't be able to: it would either get a fault or just garbage, because it doesn't have the key associated with that VM's code, and so it wouldn't get the right key to actually decrypt the VM's contents.

Okay. SEV also implements attestation, which looks very similar to what we saw previously. Essentially, the secure processor measures the guest VM's memory contents at launch, along with various platform configuration about the system, which includes things like the secure processor's firmware version, the chip ID, and what settings are enabled (for example, is SEV enabled at this point in time or not). Then the secure processor, which has a private key provisioned onto it that is unique to it and chains back up to AMD's root, signs the attestation statement with all those measurements in it, and hands that attestation
report up to the verifier, which in this case is the guest VM, who can then verify it or forward it along to some other entity if it wants to.

Okay. There is one issue with SEV, though: a lack of integrity. Nothing in the system as described so far gives you guest VM integrity, which means the hypervisor can tamper with guest memory, remap it, and so on. That's a pretty obvious problem, and it's why SEV-SNP exists. This is SEV with secure nested paging, which essentially replaces SEV and fixes that problem. It was introduced in 2020, so it's relatively recent.

There are two features of SEV-SNP I want to note. The first fixes the integrity problem and is called the RMP, or reverse map table. It's a new data structure that records the owner of each 4K page of memory. The owner could be the AMD SP, the hypervisor, or a particular VM, and the RMP enforces that only the owner of a page can write to it. This is in addition to the normal x86 page tables: the x86 page tables are ultimately handled by the CPU, whereas the RMP is owned by something outside it, the secure processor, along with the additional hardware added for the RMP. So it's not the normal x86 page tables; it sits on top of them, and it fixes the integrity problem.

The other feature is virtual machine privilege levels, which allow a guest VM to divide its address space into regions with different security levels. The way this works is you have levels VMPL0 up to VMPL3, where zero is the most privileged level. That's where you can implement things like security enforcement, and then you run the rich OS, your main operating system and user space, in VMPL3. Each VM does this independently.
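The RMP's ownership rule from a couple of paragraphs back can be modeled in a few lines. Page numbers and owner names here are made up; the point is just that every write is checked against a single recorded owner per page.

```python
# Toy model of SEV-SNP's reverse map table (RMP): each 4K page has exactly
# one owner, and only that owner may write to it. All names are made up.
class RmpViolation(Exception):
    pass

rmp = {0x100: "VM-A", 0x101: "VM-B", 0x102: "hypervisor"}  # page -> owner
memory = {}

def write_page(actor: str, page: int, data: bytes) -> None:
    # The RMP check happens on writes, in addition to the x86 page tables.
    if rmp.get(page) != actor:
        raise RmpViolation(f"{actor} does not own page {page:#x}")
    memory[page] = data

write_page("VM-A", 0x100, b"guest data")        # owner write: allowed
try:
    write_page("hypervisor", 0x100, b"tamper")  # hypervisor write: blocked
except RmpViolation:
    print("hypervisor write to VM-A's page denied")
```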
And what that lets you do is implement things like a software-emulated trusted platform module, or vTPM, which is pretty nice, and those are unique to each VM. Okay, so that's SEV.

Now let's take a look at AWS Nitro Enclaves. AWS Nitro Enclaves are a lot simpler conceptually: essentially, they give you a trusted execution environment plus remote attestation within a particular EC2 instance. I want to take a moment to note something about the Nitro Enclave trust model, because it's a little different from the AMD SEV one, as well as the Intel one we'll see in a second. While Nitro Enclaves do implement confidential computing's two main components of a trusted execution environment plus remote attestation, the trust model differs because AWS is ultimately the one implementing the hardware that backs the entire Nitro Enclave story. You're not really shifting trust into a different place, because AWS is both the hardware vendor and the cloud service provider. It's not quite the same as the AMD or Intel story. It's still a very useful feature that gives you a lot of important guarantees and a confidential computing story, but it is slightly different, so if that matters to you, keep it in mind.

The way AWS Nitro Enclaves work is you have your EC2 instance, which gets divided into two chunks, essentially. You have the host, or parent, EC2 instance, which runs the normal user space and (probably) Linux kernel, but there's a carved-out region, the enclave, which has no persistent storage and no network access, and which communicates with the parent over a vsock. The enclave is essentially the TEE: you put your workload in there and communicate with the parent EC2 instance. You can request attestation statements from the Nitro hypervisor, and that's ultimately rooted in the Nitro chip.

Nitro Enclaves run a format particular to them called an enclave image file, or EIF, which is a format that includes a Linux OS and whatever enclave applications you choose to put in it. The nice thing about EIFs is that they can be code-signed. This lets you put the code hash, signing certificate, and signature inside the attestation statement. So instead of just getting a hash of a particular piece of code running at that point in time, your attestation statement can include a normal code-signing signature and certificate, which gives a verifier a bit more flexibility in its policy. Recall that before, we imagined a verifier that would just look at a particular code hash and make sure the
attestation statement was signed and matched some hash it pinned. With code signing, you could instead say: I'll trust any validly signed attestation statement whose contents are code-signed by some particular code-signing authority. So you don't have to pin a specific hash; you can just trust a public key that you decide to trust. That's a bit more flexible. It's a different guarantee, but maybe it's useful for your use case, for example an internal one, as opposed to trying to prove to a third party that one specific piece of code was running.

So, I mentioned Nitro Enclaves implement attestation. The diagram looks exactly as before, so instead of repeating it, here's an example of what the attestation statement actually looks like. There are various fields here, most of them probably unsurprising: the module ID is the chip ID, and there's a timestamp for when the measurement was actually taken. But one of them is particularly not self-explanatory: PCRs. That stands for platform configuration registers, a term that comes from trusted platform modules. The details of exactly what that means don't matter for now; just know that it's basically a map from some index to some fields, and that map includes the hash of the code that actually got measured, as well as the signature if you choose to use code signing. You can also pass in a public key, user data, and a nonce, which lets you do things like bind a secure channel, say one set up between code running inside your Nitro Enclave and code running outside of it, to the attestation statement.
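As a sketch, here's what checking such a statement might look like, with the attestation document mocked up as a plain dict. The field names are modeled loosely on the Nitro document; treat them as illustrative, since the real document is COSE-signed CBOR whose signature must also be verified against AWS's Nitro root certificate.

```python
import hashlib

# Mock Nitro-style attestation document check. Field names are modeled
# loosely on the real format and the signature check is omitted; a real
# verifier must validate the COSE signature against AWS's Nitro root.
PINNED_PCR0 = hashlib.sha384(b"example EIF contents").hexdigest()

def check_document(doc: dict, expected_nonce: bytes) -> bool:
    # Freshness: the nonce we sent must come back in the document.
    if doc.get("nonce") != expected_nonce:
        return False
    # Measurement: PCR0 (the enclave image measurement) must match our pin.
    return doc.get("pcrs", {}).get(0) == PINNED_PCR0

doc = {
    "module_id": "i-0123456789abcdef0-enc0123456789abcdef",  # made-up ID
    "timestamp": 1700000000000,
    "pcrs": {0: hashlib.sha384(b"example EIF contents").hexdigest()},
    "nonce": b"client-chosen-nonce",
}
assert check_document(doc, b"client-chosen-nonce")
assert not check_document(doc, b"stale-nonce")  # replayed document fails
```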
Okay, so one thing that's really nice about Nitro Enclaves is that they have a tight binding with AWS KMS, where you can configure KMS key policies such that a key is only usable if KMS is presented a fresh attestation document from a Nitro Enclave that, say, matches a specific hash. So you can have really restrictive policies about how your KMS keys actually get used, ensuring they're only used from specific pieces of code running within Nitro Enclaves, which is quite useful.

Okay, now let's take a look at Intel's offerings. Intel's offerings are newer. Basically, TDX, or Trust Domain Extensions, is the Intel equivalent of AMD SEV-SNP. It's a bit newer than AMD's offering, and similarly it's built on years of technology, with SGX being the foundation. I don't really have enough time to go into the details; just know that while
there are offerings on public clouds for TDX, it's not quite as robust as SEV-SNP. In 2024, I believe it was September of 2024, Google announced support, and then in January of 2024, Azure announced support for TDX. AWS doesn't have support for TDX yet. Okay, so let's look at confidential computing offerings in the public cloud. What is there right now, today? AWS of course has Nitro Enclaves, which are unique to AWS. They're quite easy to work with, they have very solid documentation, and there's some really good third-party documentation by Trail of Bits that, if you're going to work with Nitro Enclaves, I really encourage you to check out. The EIF tooling I
found to be a little bit lacking when trying to work with it. I built some tools and some wrapper libraries to assist with that, if you want to try using those. But again, I want to highlight that Nitro Enclaves, compared to the other technologies, SEV-SNP and TDX, are really ready to use and quite easy to start building with. AWS does offer SEV-SNP as well, but they don't have a lot of infrastructure around it to make it easy to build on top of, compared to GCP and Azure. And you might wonder, why would AWS offer both Nitro Enclaves and SEV-SNP? There are various reasons why. One, for example, is that the trust models between the two are different, as we noted earlier. The second is that Nitro Enclaves don't have network access. So if you run code in a Nitro Enclave, you probably have to write your code specific to that Nitro Enclave. It's unlikely that you would be able to just lift and shift some workload that you're already running, like some web server for example, and put it in a Nitro Enclave; you're going to have to change your code. SEV-SNP doesn't have that restriction, so you could plausibly do that. And so if that's what you need or if that's what
you want, then it would be more suitable for that kind of use case. Okay, so GCP doesn't have anything like Nitro Enclaves. Instead, they have this thing called Confidential VMs, which are basically their managed, or wrapped, offering around SEV-SNP and also Intel TDX. The support for SEV-SNP is better. They also have something built on top of Confidential VMs called Confidential Space, which has a particular use case in mind: you have multiple parties who want to submit data to some place and collaborate without exposing it to each other, and Confidential Space is meant to make that particular use case easier to do. Azure also has an offering called
Confidential VMs, and it's basically the same as GCP's. It's built on top of the same stuff, SEV-SNP as well as Intel TDX. The thing to note about Azure is that they were actually first to market with a lot of this stuff, so you might find that they have more documentation, and there's more discussion online about Azure's offerings in particular. So keep that in mind. There are also a bunch of really nice open source projects built around this stuff, and they're being actively developed. These are things to either make it easier to actually deploy this stuff or to write code that runs within trusted execution environments. So the first
one, Enclaver, is essentially a wrapper around everything that I've talked about so far. It's really just designed to make it easier for you to build a confidential computing based system without having to worry about the super low-level details as much. QuorumOS by Turnkey is something that's designed to make it a little bit easier, let's say, to build a trusted execution environment application. And the third example is basically a minimal example of getting code working in an AWS Nitro Enclave. So if you want to try that out, you can base it off that as a template, and it should get you up and running quite quickly. Uh the last thing
I want to note is Google's Project Oak, which is a really ambitious project within this confidential computing space. It's very cool, and it's worth checking out. It's not something meant for third parties to use, but if you think it's neat, then you may want to see what they're doing. Okay, there are a couple of nuances and subtleties with everything that I've talked about so far, and I just want to highlight those. The first is this issue of guest VM firmware and how you actually verify it. Essentially, whenever you launch a guest VM, the initial code that's running is often called the guest VM firmware. And the VM firmware that's often used is
called OVMF, or Open Virtual Machine Firmware. In many cases in the public cloud offerings, the OVMF binary that's used in the guest VM isn't something that you provide. In fact, it's also closed source. So if you want to be 100% sure about what guest VM firmware is running, you really can't, unless you're able to bring it yourself, or the CSP makes it open source and also makes it reproducibly built. This is kind of a recognized issue; there's some discussion on various forum threads about this. So it's up to you to decide if this is something that's important to you or not. But I would
note that on AWS, if you have a bare metal EC2 instance, I believe it's possible to bring your own OVMF. I haven't tried it myself, so I'm not sure if it will work, but I think it's possible. The next issue is this idea of extending the chain of trust. If you recall, SEV-SNP gives you a measurement of what is running in the guest VM firmware at launch. It doesn't inherently extend the chain of trust up to your user space, or even your kernel. And so if you want to do that, you're going to have to modify some code as part of your boot sequence. And stock
OVMF doesn't inherently do this. So there's this idea of runtime measurements that you want to get. For example, you could use a virtual TPM to give you runtime measurements, but then you also need to worry about the whole boot chain. There's this paper called SNPGuard that basically modifies OVMF to solve this problem. They forked OVMF, but I don't think it's upstreamed into mainline OVMF. So if this is something that you're concerned about, you'll have to look at that paper, see what they're doing, and see if it makes sense for you. And then the third thing is just the possibility of implementation flaws, which is something that
exists in any system. Google Project Zero and Google Cloud Security did a really good audit of SEV-SNP in 2022, and they found several issues. This really just highlights that just because you moved trust assumptions around, that doesn't inherently make security issues go away, and we ultimately have to trust AMD or Intel or AWS to get these things right if we want the security assurances that we're looking for. Also, on AWS Nitro Enclaves, there's this really good blog post that I encourage you to check out that highlights some of the nuances and subtleties with Nitro Enclaves. It's nothing bad, nothing major to really call out, but,
you know, it just makes you understand how the system works a little bit better. Okay, with the remaining time, I want to talk a little bit about some applications and examples of how you might go about using these technologies. Some examples of products that are using confidential computing: with cryptocurrency, Block has a self-custody Bitcoin wallet called Bitkey, which uses Nitro Enclaves, and we'll look at exactly how that works in a moment. For password managers, 1Password recently published a blog post on how they're using confidential computing for enterprise reporting for password management. Um, for machine
learning, Apple of course has Private Cloud Compute, and they have a bunch of really good technical documentation on how that works. It's really worth checking out. And then lastly, Stripe did a presentation at AWS re:Invent 2023, where they co-presented along with Amazon on how they're using Nitro Enclaves for traditional payments. So these are all really good resources to check out to see how these companies are using this stuff. I'll talk a little bit more about Bitkey, because this is something I worked on, so I'm more familiar with exactly how it operates. The spinning thing here is the hardware part of the wallet. So, with Bitkey, you don't really have to know
how it works to understand the point I want to make. You just need to know that it's a Bitcoin wallet, and it's a 2-of-3 multisig wallet. That means you need a quorum of two of three signatures to move your Bitcoin. And the way it works is there are three parts: there's a key on the hardware, which is the thing you just saw; there's a key on the app; and then, for our purposes, what we care about: there's a key on the server, in a Nitro Enclave. And the way we manage this, there are basically three parts to know. So there's the CMK, or customer managed key, which is kind of an AWS term. It's a key within KMS
that has a policy configured on it such that you can't use it unless you present a valid attestation certificate that matches a specific code hash that we pin. Then the Nitro Enclave communicates with DynamoDB, which has two kinds of encrypted keys: it has wrapped DEKs, or data encryption keys, which encrypt the other part, which is the actual customer Bitcoin transaction signing keys. So if you want to acquire a customer's Bitcoin transaction signing key, first you would need to present an attestation statement from the enclave to KMS with the right hash to actually acquire the CMK, and then you can talk with DynamoDB to get the DEK, which would be encrypted
at that point, but then you can decrypt it. The DEK is a symmetric key, which would then allow you to decrypt the particular Bitcoin transaction signing key that you care about from DynamoDB, and then sign a transaction within the Nitro Enclave. So, it's a pretty normal way to use this kind of stuff, but it's a very powerful primitive that I think fits into a lot of different systems. Cool. So, that's all I have. We have a couple minutes for questions. So, thank you all.
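The layered key flow described above, an attestation check gating the CMK, which unwraps a DEK, which in turn unwraps the signing key, can be sketched as a toy model. Everything here is hypothetical: the names are made up, XOR with a hash-derived keystream stands in for real authenticated encryption, and plain dicts stand in for KMS and DynamoDB. It illustrates the structure of the flow, not the production system.

```python
# Toy sketch of a Bitkey-style key hierarchy. NOT real crypto: XOR stands in
# for AES, and dicts stand in for KMS / DynamoDB. All names are hypothetical.
import hashlib
import secrets

def toy_cipher(key: bytes, data: bytes) -> bytes:
    # Placeholder cipher: XOR against a SHAKE-256-expanded keystream.
    # Applying it twice with the same key recovers the plaintext.
    stream = hashlib.shake_256(key).digest(len(data))
    return bytes(a ^ b for a, b in zip(stream, data))

# Hash that the KMS key policy pins (stand-in for an enclave image measurement).
PINNED_HASH = hashlib.sha384(b"enclave-image-v1").hexdigest()

cmk = secrets.token_bytes(32)          # customer managed key, lives in "KMS"
dek = secrets.token_bytes(32)          # data encryption key
signing_key = secrets.token_bytes(32)  # customer's Bitcoin signing key

# "DynamoDB" only ever stores wrapped (encrypted) key material.
dynamodb = {
    "wrapped_dek": toy_cipher(cmk, dek),
    "wrapped_signing_key": toy_cipher(dek, signing_key),
}

def kms_decrypt(attestation_hash: str, ciphertext: bytes) -> bytes:
    # Key policy: only release plaintext if the presented attestation
    # matches the pinned enclave code hash.
    if attestation_hash != PINNED_HASH:
        raise PermissionError("attestation does not match pinned hash")
    return toy_cipher(cmk, ciphertext)

# Inside the enclave: present attestation to unwrap the DEK, then use the
# DEK to unwrap the customer's signing key.
recovered_dek = kms_decrypt(PINNED_HASH, dynamodb["wrapped_dek"])
recovered_signing_key = toy_cipher(recovered_dek, dynamodb["wrapped_signing_key"])
assert recovered_signing_key == signing_key
```

The point of the layering is that the signing key is never stored or transmitted in the clear, and nothing can start the unwrapping chain without a fresh attestation matching the pinned code hash.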
All right, looks like we have several great questions here. Just as a reminder, the way we do Q&A here is through Slido. You can simply take any device, open a browser, go to sli.do, and type in the conference code, which is BSidesSF2025. We're listed under theater 15 in that system's classification, even though physically we're in 14, so look under 15. And there are already three questions filed in there, which is great. So the first one here: you mentioned that source code needs to be open source for verification, but in some kinds of confidential computing the data
is payment data and healthcare data, and especially in those scenarios, open source code is not very common. How practical is CC currently in those scenarios? So for traditional payments and healthcare, yeah, that's definitely a concern. I think it ultimately comes down to the particular compliance requirements that you have to meet, whether that be HIPAA or some PCI stuff for traditional payments. I don't necessarily know what the rules say as it pertains to open sourcing code. I don't know if there are specific regulatory rules that say you can't, or if it's a points-based system where maybe you get docked points if it's open source or something. Honestly, not totally sure.
Um, so yeah, this is definitely true though. It's less common in these more traditional settings. It's more common among, for example, a bunch of cryptocurrency startups, Bitcoin and Ethereum related, that are using trusted compute now, and we also see it's very common with LLMs, where you want to process the data that you're feeding into the model in a secure way. So it's definitely more common to use this stuff where there are fewer regulatory and compliance requirements. I don't know if that makes it impossible per se. Stripe, for example, is using it, although they aren't open sourcing it. I believe they're using it more for
internal security guarantees, but it's still useful for those internal security guarantees as well. So you can definitely use it. I guess I want to note that the reproducibility thing is really only important if you want a third party outside of your company to be able to have assurance about what you're running. You can still have all the confidential computing benefits if you're not doing that; it's just that a third party wouldn't be able to get assurance about what code you're running. All right. So: Intel hasn't had a great security track record with SGX, and since TDX relies on SGX enclaves, have SGX's flaws affected TDX's
perceived trustworthiness and industry uptake? Yeah, it's a great question. I actually don't know the answer too well. I have looked much more at SEV-SNP and how that's implemented compared to TDX, so I don't really have a great answer on this one, but yeah, it's totally an issue. There is a rich history of vulnerabilities with SGX, and I'm not sure if they've been addressed with TDX or not, but it's definitely something worth checking