
Workstream: Hardware Attested Platforms #975

Open
1 task
chkimes opened this issue Oct 3, 2023 · 27 comments
Labels: workstream (Major effort comprising multiple sub-issues)
@chkimes

chkimes commented Oct 3, 2023

This is a tracking issue for incorporating Hardware Attested Platforms, aka Trusted Computing into SLSA. The main idea is to provide greater trust in the build by using trusted computing features like Trusted Execution Environments (TEEs) of modern CPUs to reduce the risk of tampering and to increase transparency.

Workstream shepherd: Marcela Melara (@marcelamelara), Chad Kimes (@chkimes)

Working proposal: #1051
Proposal doc: here

Related: We might want to merge with #977 (Build L4, discussing reproducible builds) and/or #985 (about hardening operations), as discussed below.

Sub-issues:

  • N/A

In the 2023-09-13 Supply Chain Integrity meeting, @marcelamelara and I presented on a potential new SLSA track, using cryptographic primitives provided by hardware to validate build environments.

Slides: https://docs.google.com/presentation/d/11cycDxYaoZpuG144pR6atI1_zk2CfZOWlNO_f_HhhyE
Doc: https://docs.google.com/document/d/1l7IKAli-K-uof8VkLuiqV5-hMGS_ecDmBcuc07-ILeQ/edit
Recording: TBD pending upload to YouTube

Some points for discussion, seeding some from the SCI meeting:

  1. Is this a new track or an extension of other tracks?
  2. I've labeled this as the Build Platform Operations track, however the Future Directions page defines a set of requirements that are likely only verifiable through audit, whereas the attestations defined above are verifiable at runtime. Is this the appropriate track to be defining these in or is there yet another track to distinguish these?
@MarkLodato MarkLodato added the workstream Major effort comprising multiple sub-issues label Oct 10, 2023
@MarkLodato MarkLodato changed the title Build Platform Operations Track tracking issue Project: Build Platform Operations Track Oct 10, 2023
@MarkLodato (Member)

I think this isn't really specific to Build platform. Maybe just "Platform" or "Platform Operations" or "Platform Security" track? For example, once we have a Source track, the same requirements will likely apply there, too.

@chkimes (Author)

chkimes commented Oct 10, 2023

@MarkLodato I'm trying to reason through what you mean by that but I'm not sure I follow. The doc describes a handful of cryptographic operations that can be executed in the build environment, not only by the platform to provide their own attestations about the environment but also by a consumer who wants to perform their own validation. I think this property is unique to CI because the consumer can execute whatever they want inside the build environment.

We are also looking at fleshing out some requirements that can't be validated in the same way - the distinction we have been drawing there is that some requirements are "verifiable" whereas others are "auditable" (e.g. requiring a third-party investigation and attestation that certain requirements are being met). Is that where you see the overlap with things like the Source track?

@MarkLodato MarkLodato changed the title Project: Build Platform Operations Track Project: Hardware Attested Builds Oct 11, 2023
@MarkLodato (Member)

I'm sorry, I saw "Build Platform Operations" but didn't read the actual proposal or your questions. Upon further review, I agree that it's not "Build Platform Operations" but more of a "Hardware Attested Build". I also am not sure it's necessarily a new track. I'll update this issue and create a separate one for the Operations track.

@MarkLodato (Member)

MarkLodato commented Oct 11, 2023

This also seems related to the reproducible builds discussion in #977. In both cases, we want stronger guarantees that the build platform is not compromised. With reproducible builds, we do that by running the build multiple times by independent parties. With trusted computing, we rely on hardware built into modern CPUs. In both cases, we increase the cost of attack.

It also seems to overlap with Operations track in #985. Yet again, it's about preventing an operator from influencing the build.

Maybe it makes sense to merge all three ideas into a new Build level or two (L4 or L5) that describes this higher property - that an operator of a single build platform has no way of influencing the build, even if they collude with colleagues?

@david-a-wheeler @arewm

@marcelamelara (Contributor)

marcelamelara commented Oct 11, 2023

I'm going to push back a little bit on the notion that requirements on platform operations (implemented by operators) should be merged with the Build track (implemented by developers). I do think that choosing a verifiable build platform (whether that's reproducible and/or HW-attested etc.) should correspond to a higher Build track level, but from a separation of responsibilities standpoint, I think that defining integrity properties and requirements for build platform operators does warrant a separate track.

The other comment I'll make is that from a platform operator view, implementing a verifiable build platform will require us to dive one layer deeper into the different components. Even though we tend to think of build platforms as a single unit, this isn't really true in practice. For instance, one challenge we've come across is defining requirements for cases in which the build platform doesn't necessarily own/provide all of its own infrastructure (e.g., in GH's case, the build VMs run on Azure, introducing additional operators from third parties). So, we've found that we'll have to identify these trust boundaries and components within a build platform so that SLSA may define requirements for these various pieces. We've made a first attempt at this in our current build platform model. We've laid out a number of requirements for build platform operators in our Draft Doc, though they are currently guided by what is feasible through hardware-based mechanisms. I'd be very interested in hearing others' thoughts on whether those requirements are 1) sufficient and 2) generalizable to other verifiable platform approaches.

EDIT: In case this wasn't clear from my comments above, I want to distinguish between HW-attested builds and HW-attested platforms. HW attestations don't attest to application behavior, they attest to platform integrity and need to be enabled by the platform operator, so I think we need to be careful not to conflate the two concepts.

@jkjell

jkjell commented Oct 12, 2023

@MarkLodato's earlier comment got me thinking about whether this could be generalized to something like "Hardware Attested Supply Chain Steps". That's definitely a terrible name, so please don't actually use it. 😅

I know that SLSA is primarily concerned with Build today, but I could see additional tracks with recommendations for static analysis, testing, or vulnerability scanning. Even in the case of a source track, many organizations run and host their own source code repositories (GitLab, GitHub Enterprise, etc.) and will be concerned with tampering with those systems. Or a developer's laptop could be tampered with upon committing code.

From the threats in the document I could see a generalization towards something like:

  1. Tampering at platform boot time -> Remains same
  2. Tampering during build init -> Tampering during step init
  3. Tampering during build execution -> Tampering during step execution
  4. Tampering with build environment image generation -> Tampering with step results
  5. Tampering with/misuse of control plane/build cache resources -> Tampering with cached resources

The modification or tampering with source code, an SBOM, vulnerability database, vulnerability scan results, static/dynamic analysis findings, or many other supply chain steps could certainly compromise a software supply chain.

@MarkLodato (Member)

MarkLodato commented Oct 12, 2023

@marcelamelara Sorry, I don't follow. What is the difference between "HW-attested builds" and "HW-attested platforms"? Can you rephrase in terms of threats that are being addressed? I think that might help me better understand. (https://slsa.dev/threats may provide some prior art.)

In my mind, the threat model is this: the attacker intends to get the victim to accept an artifact A' with provenance P' even though P' is not (completely) true. For example, suppose building git repo R at commit C on build platform B actually results in an output file whose contents are A = "Hello world!", i.e. the real provenance is P = {A, B, R, C}. The attacker's goal is to get the victim to accept alternate artifact A' = "Goodbye world!" with false provenance P' = {A', B, R, C}, even though building from B, R, and C did not actually (or wasn't supposed to) result in A'.

Threats addressed at each level:

  • L1 doesn't prevent any attacks. The attacker can simply hand-craft P' and present it to the victim.
  • L2 prevents attacks after the build, i.e. replacing the artifact with A' and/or provenance with P', thanks to verifying the hash and signature.[1]
  • L3 prevents attacks during the build by the tenant of B, such as:
    • A tenant steals the provenance signing key and uses it to sign P'.
    • A tenant initiates a build from some other repo R' (or commit C') and tricks the platform into signing provenance saying it came from R (or C). This could happen if the build platform reads values from the untrusted tenant process.
    • During a legitimate build from R,C, an external attacker influences the build so that it results in output A' instead of A, such as by altering running processes via SSH or poisoning the build environment from a previous build so that the next build picks up an illegitimate binary.
  • Tampering by the platform operators or someone with physical access to the machines is not currently covered by Build L3. These are all the same threats as above, but with a different actor.

I thought that last bullet was what both Hardware Attested Builds and Reproducible builds protect against. In both cases, it is a reduction in the size of the Trusted Computing Base. At Build L3, you need to trust hundreds/thousands of employees, software, and hardware with privileged access. With Hardware Attested Builds, we can rely on Intel/AMD's hardware[2]; with reproducible builds, we can require the attacker to compromise multiple independent parties. Either way, the threat seems similar.

Am I misunderstanding? Is the threat actually something else?

Footnotes

  1. The value of L2 is quite limited, since the attacker can initiate their own build on platform B and forge the provenance. (Preventing this is only required at L3, right?) This may have some value within a company where an employee risks getting fired for doing such an attack, but it probably has little value for public service where attackers are anonymous. I think we should consider strengthening L2 a bit so that the attacker cannot tamper with the provenance but may be able to tamper with the build. I suspect that would make L2 more meaningful and in practice is how most L2 systems are implemented anyway. I'll file a separate issue for this.

  2. IIUC today's trusted computing hardware has vulnerabilities that allow someone with physical access to extract the secret keys, but I think we can work around this.
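The P = {A, B, R, C} tuple and the L2-style check described above can be sketched as a toy verifier. This is an illustration of the model in the comment, not of any real SLSA tooling; all names are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Provenance:
    # Toy encoding of the P = {A, B, R, C} tuple from the threat model above
    artifact_digest: str  # A: hash of the built artifact
    builder_id: str       # B: identity of the build platform
    repo: str             # R: source repository
    commit: str           # C: source commit

def accepts(p: Provenance, signature_valid: bool, observed_digest: str) -> bool:
    """Build L2-style check: the consumer verifies the platform's signature
    over P and that the artifact in hand hashes to the A named in P. This
    defeats post-build substitution of A' or P', but not attacks carried out
    during the build itself (those are addressed at L3 and above)."""
    return signature_valid and p.artifact_digest == observed_digest
```

For example, swapping in an artifact A' with a different digest, or presenting an unsigned P', both fail the check, matching the L2 bullet above.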

@MarkLodato (Member)

@jkjell Yeah, I think it can generalize well. We might still consider it a "build" in the generic sense, taking some inputs and transforming to some output, with the provenance describing that transformation process. If you identify everything by hash and have trustworthy provenance, then perhaps we could cover a lot?

@MarkLodato MarkLodato changed the title Project: Hardware Attested Builds Project: Hardware Attested Platforms Oct 12, 2023
@MarkLodato (Member)

Updated the title to "Hardware Attested Platforms" as per @marcelamelara's comment at #981 (comment).

@marcelamelara (Contributor)

marcelamelara commented Oct 13, 2023

What is the difference between "HW-attested builds" and "HW-attested platforms"? Can you rephrase in terms of threats that are being addressed?

@MarkLodato I appreciate the detailed threat model. In this framing, I'd add that the P tuple should include a fifth element T, which is the tenant-defined build process (per the current Build model). I'd say that an "attested platform" allows both a tenant of B and a consumer of P to verify explicitly that it was B and not B' that produced P, irrespective of T. Reproducible builds, as I understand it, are more about validating that T produced P and not some build process T'. Does this make sense?

@marcelamelara marcelamelara changed the title Project: Hardware Attested Platforms Workstream: Hardware Attested Platforms Oct 16, 2023
@MarkLodato (Member)

MarkLodato commented Oct 20, 2023

In this framing, I'd add that the P tuple should include a fifth element T, which is the tenant-defined build process (per the current Build model).

Ah, this may be the difference in our mental models. In my mind, the process T is defined entirely by the repo R and commit C (plus any other external parameters, which I elided from the example for simplicity), so it's unnecessary to add any more information to the provenance. Could you give an example of T? I may be missing something.

I'd say that an "attested platform" allows both a tenant of B and an consumer of P to verify explicitly that it was B and not B' that produced P, irrespective of T.

Isn't that the purpose of the signature on the provenance?

Reproducible builds, as I understand it, are more about validating that T produced P and not some build process T'.

In my mind, it's about shrinking the size of the trusted computing base (TCB):

  • With conventional (non-reproducible, non-attested) builds, you have provenance P = {A, B, R, C} which claims inputs R+C resulted in output A. You need to trust the platform B that this was true. In other words you have the full size of the TCB of platform B. This can be very large: physical access, microservices, key management, remote access, and so on. Build platforms are very complicated and there is a large attack surface, whether B is a remote SaaS platform or an in-house platform run by a team.

  • With a system of verified reproducible builds, this shrinks the TCB to the intersection of multiple platforms. For example:

    P1 = {A, B1, R, C}
    P2 = {A, B2, R, C}
    P3 = {A, B3, R, C}

    This gives you greater confidence that inputs R+C really produced output A because three independent parties that you trust, B1, B2, and B3, have all made the same claim. If some but not all were compromised, you would be able to detect the difference and react accordingly.

  • With hardware attested builds, it is about shrinking the TCB of an individual platform. You still have P = {A, B, R, C} claiming inputs R+C produced artifact A. But now the TCB of B and thus attack surface is much smaller. There are still physical access attacks, but they are much more costly to pull off. You still have to trust the software running inside the TEE, but that is way smaller than all of the software in the TCB of a conventional build platform. And you can basically ignore remote access, I think.

So I see both of these as gaining greater trust in the provenance claiming that inputs R+C really produced output A. They are also complementary.
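The intersection idea in the P1/P2/P3 example can be sketched as a small consensus check over independent provenance claims (hypothetical structure, illustrative only):

```python
def consensus_artifact(claims, min_builders=3):
    """Accept an artifact digest for fixed inputs (R, C) only if at least
    `min_builders` independent platforms all reported the same digest.
    Each claim is a (builder_id, artifact_digest) pair, standing in for
    P_i = {A, B_i, R, C}. A single compromised platform then shows up as a
    detectable disagreement rather than a silent forgery."""
    builders = {builder for builder, _ in claims}
    digests = {digest for _, digest in claims}
    if len(builders) < min_builders:
        return None  # not enough independent rebuilders yet
    if len(digests) != 1:
        return None  # disagreement: investigate which builder diverged
    return next(iter(digests))
```

With B1, B2, and B3 all reporting the same digest the artifact is accepted; if B2 reports something else, the function refuses and the divergence can be investigated, matching the "detect and react" property described above.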

@chkimes (Author)

chkimes commented Oct 20, 2023

I agree that the two are complementary - I figure that a hardware attested build can strengthen the claims made by any individual reproduced build.

While reproducible builds and hardware-attested builds are targeting a similar problem, I think there is a particular niche that reproducible builds does not solve well. While SLSA is heavily influenced by desires to secure the distribution of open-source software, the framework (and the guarantees provided by it) map well into a closed-source organization if given a suitable private sigstore implementation. I have already heard from a number of our highly regulated customers that they are working to ensure all of their CI workflows meet SLSA L2 and eventually L3 despite the fact that their code will never be publicly visible. Ignoring the security benefits, there are also ecosystem-level reasons for us to encourage closed-source adoption of SLSA - since greater organizational adherence is likely to drive more open source adoption.

For closed-source organizations, reproducible builds are still a desirable goal but the act of actually reproducing them on other build providers is likely to be cost prohibitive. At the very least, there will always exist cost-based negative incentives to doing so. Hardware attestation of build environments provides a lower cost option to reduce the amount of trust required in the builder.

@MarkLodato (Member)

@chkimes I agree with everything you said (except the bit about SLSA being influenced by the desire to secure open source; see https://cloud.google.com/docs/security/binary-authorization-for-borg 😄.)

That's why I am suggesting that SLSA focus on the problem/outcome rather than a specific solution. If hardware-attested builds and reproducible builds indeed target roughly the same problem, then my inclination is to describe the desired outcome as a single level rather than having one level/track for one solution and another level/track for another solution. But I don't think we yet have agreement that they are targeting the same problem. 😁

@pdxjohnny

pdxjohnny commented Nov 5, 2023

@henkbirkholz @fournet

@marcelamelara (Contributor)

To update this thread: given the discussions we've had with the SLSA spec community, we've landed on including HW-attested build platforms as part of a higher level of the Build track. The main reasoning is that the Build track already covers both producer and build platform requirements.

@mswilson

The attestation function of commercially available TEEs isn't, itself, implemented in hardware. It's typically implemented in software that is provided by a hardware provider (e.g., the SGX quoting enclave).

I think that the important property is that there is a trusted third party who is standing behind the attestation, not that it's "hardware attested." They may use a combination of proprietary hardware, software, and business processes to provide a high-assurance attestation.

I would encourage adopting a broader definition / terminology so that many trusted / confidential computing designs can "qualify" if the parties trust the attestations made.

@jkjell

jkjell commented Feb 1, 2024

@mswilson the draft document references vTPMs and Confidential Virtual Machines. Often, those technologies are implemented at some level in hardware (e.g., specific instruction sets for virtualization). Are there other technologies implementing trusted / confidential computing designs that you think should be included?

the important property is that there is a trusted third party who is standing behind the attestation

I disagree with this. This definition would allow anyone to provide any sort of attestation, with no ability to verify anything, and we would take it on trust. This would be akin to a signature on a container image: a black-box representation of trust. I see the important property as the increase in transparency about what is being attested to, and the ability of an external party to verify it. In this way, we reduce the trust placed in a third party to the smallest scope possible. That scope is detailed in the threat model.

@chkimes (Author)

chkimes commented Feb 1, 2024

I would encourage adopting a broader definition / terminology so that many trusted / confidential computing designs can "qualify" if the parties trust the attestations made.

The proposal explicitly allows this if a user wants to achieve it. In a very brief summary, the proposal requires:

  1. A provenance statement generated for the VM disk image content.
  2. A verifiable boot chain including validating the VM disk in some way before using it.
  3. The build platform to perform a remote attestation to validate that the above is in a healthy state.
  4. The ability for any user of the build platform to perform their own remote attestation of the VM state with any third party of their choosing.
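As a rough illustration of requirements 2 and 3, a verifiable boot chain is commonly modeled as a hash chain of measurements that a remote verifier replays against a hardware-signed quote. A minimal sketch, assuming TPM-style SHA-256 PCR semantics; the helper names are hypothetical and the signature check on the quote itself is elided:

```python
import hashlib

def extend_pcr(pcr: bytes, measurement: bytes) -> bytes:
    # TPM-style extend: new = SHA-256(old PCR value || SHA-256(event)),
    # so the final register value commits to the whole ordered event log.
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def replay_event_log(events):
    pcr = b"\x00" * 32  # PCRs start zeroed at platform reset
    for event in events:
        pcr = extend_pcr(pcr, event)
    return pcr

def appraise(quoted_pcr, event_log, expected_image_digest):
    # The verifier replays the reported event log and checks that it
    # (a) reproduces the (hardware-signed) quoted PCR value, and
    # (b) contains the expected VM disk image measurement, which can be
    # tied back to the image's provenance statement (requirement 1).
    return (replay_event_log(event_log) == quoted_pcr
            and expected_image_digest in event_log)
```

Because the PCR value is order- and content-dependent, a tampered event anywhere in the boot chain produces a replay that no longer matches the quote.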

I think that the important property is that there is a trusted third party who is standing behind the attestation, not that it's "hardware attested."

This is perhaps a quibble in wording? The hardware is attesting the current state of the machine depending on what has been written into its attestable measurement registers (e.g. PCRs). A first or third party is necessarily involved in that attestation dance and can then create their own attestation that the state of the hardware is valid. The proposal title perhaps makes it seem focused on the former, when both are actually in scope. Do you have a recommendation for a concise title that would make this nuance more clear?

@mswilson

mswilson commented Feb 1, 2024

I disagree with this. This definition would allow anyone to provide any sort of attestation, with no ability to verify anything, and we take it on trust. This would be akin to a signature on a container image, a black box representation of trust. I see the important property as the increase in transparency about what is being attested to, and the ability for an external party to verify it. In this way, we reduce the level of trust to a third party to the smallest scope possible. That scope is detailed in the threat model.

Any signature produced has to be taken on trust. How trust is established can be done many ways. If you are trusting a signature rooted in SGX, or SEV, or a TPM, or something else, you have to trust that the full system has a sound design, and that the implementation maintains all of the properties required to have confidence in the attestation.

You have to trust that the materials used to produce cryptographic attestation are sufficiently protected, and in practice we've seen systems where this has not held true over time. https://www.youtube.com/watch?v=mqma65eRYbo

Nitro Enclaves provide an ability for AWS to make an attestation about the image and configuration used by an instance that provisions an enclave. Some would argue that Nitro Enclave attestations are not "hardware attested" because that attestation does not surface to the user a "hardware root of trust."

Builds performed in a Nitro Enclave are more naturally hermetically sealed (there's no I/O other than what is permitted via a vsock connection) and have a minimized TCB. I think it would be a shame if Nitro Enclave attestations weren't considered "a high bar" merely because they are not generally marketed as "hardware attested."

@mswilson

mswilson commented Feb 1, 2024

This is perhaps a quibble in wording? The hardware is attesting the current state of the machine depending on what has been written into its attestable measurement registers (e.g. PCRs). A first or third party is necessarily involved in that attestation dance and can then create their own attestation that the state of the hardware is valid. The proposal title perhaps makes it seem focused on the former, when both are actually in scope. Do you have a recommendation for a concise title that would make this nuance more clear?

Perhaps "trusted independently attested compute environments"? Too wordy?

A point I'm trying to make is that there are compute environments where hardware details are abstracted away from the user of the resource, and that is not a bad thing when it comes to building a high-trust system. Using Nitro Enclaves requires that you trust AWS to dutifully implement the isolation, protection, and attestation features of the product, "as advertised". Using Intel SGX requires the same "leap of trust" from my perspective.

@marcelamelara (Contributor)

marcelamelara commented Feb 2, 2024

@mswilson I appreciate your perspective on the nuance of AWS Nitro vs an Intel SGX, for example. One of the challenges we're trying to address in this proposal is the variety of TEEs, so I would really like to be able to make sure we're capturing AWS Nitro in our model and requirements as well. I do want to note that this proposal primarily targets implementers/deployers of build infrastructure, so in the ideal case, the tenant of the build platform shouldn't have to directly interact with the specific underlying compute platform in any case, whether it's an Intel TDX TD or an AWS Nitro Enclave.

Perhaps "trusted independently attested compute environments"? Too wordy?

I might be amenable to renaming the proposed Build Track level to something like "Attested Build Platforms" (i.e., dropping the "hardware"), but do want to emphasize that we are seeking to reduce trust in the build platform, and have much more than just the compute platform be verifiable via (hardware-rooted) attestation.

@henkbirkholz

Sorry for only now successfully following @pdxjohnny's shiny crumbs ✨ Better late than never.

I read the comments on this issue, and as there is a lot here, I'll start eclectically.

@MarkLodato "reducing" the burden of proving a single B's trustworthiness (I think you call that "shrinking the size of a TCB") via the use of consensus protocols is a fine approach. But it does not affect the trustworthiness of a given TCB. Consensus protocols simply increase assurances about trustworthiness w.r.t. a given B.

@chkimes I assume with "validate build environments" you mean to produce trusted Attestation Results that reflect the believability w.r.t. the authenticity of a produced A and in consequence the believability that a given B produces trustworthy As.

In general, what I read from this issue is that the intent is to create "stronger guarantees that the build platform is not compromised" (I'd use "assurances" instead of "guarantees").

Just including Evidence (in this issue often referred to as "attestation") produced by a B via Roots of Trust for Reporting, such as TEEs, TPMs, or whatnot, does not yield a lot if you want to assess trustworthiness. There are a lot of additional system components required to produce an Attestation Result in order to effectively assess application requirements (in this case, the production of trustworthy and authentic As). Did you consider the production of Attestation Results based on the Evidence produced by Bs?

@henkbirkholz

@chkimes

  1. The build platform to perform a remote attestation to validate that the above is in a healthy state.

Are there any tangible plans for how to realize that? The CCC Attestation SIG might be a good place to start, as there are a lot of similarly interested parties active there.

@henkbirkholz

@mswilson

Remote Attestation is based on endorsed roots of trust (in NIST's terms, 1st-party and 3rd-party Attestation). Trusting a RoT's trustworthiness is a decision (one based to a large extent on its accompanying Endorsements). If that trust relationship can be established via policy, then it becomes unnecessary to "trust that the full system has a sound design", as remote attestation procedures exist to provide you with the outcome of exactly that appraisal. As soon as a trustworthiness assessment of a B is produced via remote attestation (an Attestation Result), you can assume secure interaction with that B. If one or more components in the system are not sound, or are not covered by your policy, references, and endorsements, the full system is automatically appraised as not sound in your sense. If B changes, the Attestation Result and the corresponding Evidence it is based on become stale, and fresh Evidence of the changed B is required. In composite systems, a "change" might also be the use of a different "hardware platform". If you expect frequent changes of the underlying hardware below your relatively stable virtual layers, then, for example, TCG DICE may be more useful than TCG TPM in some cases.
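The appraisal flow described here might be sketched as follows. This is a loose reading of the RATS roles (Evidence, Endorsements, policy, Attestation Result); all names and data structures are hypothetical:

```python
def appraise(evidence, endorsed_rots, reference_values, expected_nonce):
    """Verifier sketch: turn Evidence from a build platform B into a
    boolean standing in for an Attestation Result.
    `evidence` is a (rot_id, claims, nonce) triple reported by B's Root of
    Trust for Reporting; `endorsed_rots` models Endorsements (RoTs someone
    stands behind); `reference_values` models the appraisal policy."""
    rot_id, claims, nonce = evidence
    if nonce != expected_nonce:
        return False  # stale Evidence: B may have changed since measurement
    if rot_id not in endorsed_rots:
        return False  # unendorsed Root of Trust: its reports carry no weight
    # every reference value required by policy must appear in the claims;
    # anything not covered fails the whole appraisal
    return reference_values <= claims
```

This mirrors the text above: a stale nonce, an unendorsed RoT, or a component outside the policy each cause the full system to be appraised as not sound, and a changed B requires fresh Evidence before a new Attestation Result can be produced.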

@henkbirkholz

"trusted independently attested compute environments"

Please mind that "attestation" has two very different meanings and is pretty much always confused now that NIST came up with a second definition:

https://csrc.nist.gov/glossary/term/attestation

@arewm (Member)

arewm commented Mar 21, 2024

@marcelamelara , what is the latest on this track? The proposal in the initial document is out of date as I think I recall the most recent discussion in a SLSA call being that there would be a new track with only one level (well, L0 and L1). Is there a reference to the proposal in advance of making a PR to SLSA?

@marcelamelara (Contributor)

Thanks for the ping on this, @arewm. Yes, we haven't updated the Doc; we've been quite bogged down with prepping a talk for OSS NA '24 on this topic. Given that our Google Doc is in a similar place as the Source track's, with many comments that aren't immediately actionable, we're likely going to freeze the Doc and open that PR directly.
