Should we store RekordEntry w/ PEM and Sig files? #5
Comments
would it be only the
We can take a look at sigstore/cosign#1193 and the spec that goes along with it: https://github.com/sigstore/cosign/blob/main/specs/SIGNATURE_SPEC.md. I suppose we should likely do the same thing. The bundle that can optionally be produced by cosign looks like this:
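Roughly this shape, going by the spec (field names follow cosign's bundle format; the values here are placeholders):

```json
{
  "SignedEntryTimestamp": "<base64 signature from the log>",
  "Payload": {
    "body": "<base64 canonicalized Rekor entry>",
    "integratedTime": 1654185600,
    "logIndex": 123456,
    "logID": "<sha256 of the log's public key>"
  }
}
```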
nice find! Now I have a better idea what that is.
I think it's another file in addition to cert+sig, even though it contains cert+sig?
This is the Rekor as a witness vs. Rekor as a witness + storage conversation. You want to minimally trust Rekor, using it only as a witness to an event rather than also trusting it to provide the message contents. The log can be verified and audited, but Rekor could modify the message contents before entry, providing its own certificate chain + sig, for example. This could only be detected if you also persist the certificate chain and signature.
Hey Hayden, just trying to process what you're saying as a bit of a sigstore novice. Is it that offline verification is less trustworthy and we should avoid it when possible?

I wasn't thinking that we'd *not* store the records, just that we'd keep all the data necessary to do verification with the artifact, which I think is what cosign does for OCI artifacts as well?
Sorry, I might not have answered the original question. tl;dr Offline verification is just as secure as online verification. For online verification, given an artifact, its verification material (key or cert), and a signature, we can find that entry in the transparency log and verify an inclusion proof presented by the log. You must already have Rekor's verification key. The process looks something like this:
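A rough sketch of that flow in Python, where verify_signature, search_rekor, fetch_inclusion_proof, and verify_inclusion are illustrative stand-ins for a real Rekor client, not actual APIs:

```python
import hashlib

def verify_online(artifact_path, cert_pem, sig, rekor_pubkey):
    """Sketch of online verification against the transparency log."""
    # Hash the artifact; the digest is the lookup key into the log.
    with open(artifact_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()

    # Check the artifact signature against the signer's certificate.
    verify_signature(cert_pem, sig, digest)       # illustrative helper

    # Find the entry in Rekor and ask the log for an inclusion proof.
    entry = search_rekor(digest)                  # illustrative helper
    proof = fetch_inclusion_proof(entry)          # illustrative helper

    # Verify the Merkle inclusion proof against a checkpoint signed by
    # Rekor's key, which the verifier must already hold.
    verify_inclusion(proof, rekor_pubkey)         # illustrative helper
```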
For offline verification, the offline bundle is the artifact hash, certificate, signature, and timestamp of inclusion, signed by the log. To verify this, you must have the log's verification key, the same as with online verification. You should also verify that the contents of the bundle match what you stored for the certificate, signature, and artifact hash. The trust model is the same between offline and online: you trust the log not to lie about inclusion (which can be verified by auditors). This mirrors Cosign's OCI image annotations: store the certificate, its signature, and the bundle alongside the artifact. The comment I made was unrelated; it's just something that's come up before, whether or not to use Rekor as storage.
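For comparison, a sketch of the offline check under the same caveat (canonicalize_json and the check/verify helpers are illustrative, not real APIs):

```python
import base64

def verify_offline(bundle, cert_pem, sig, artifact_digest, rekor_pubkey):
    """Sketch of offline verification using a stored Rekor bundle."""
    # 1. The stored cert, signature, and artifact hash must match what
    #    the bundle claims was entered into the log.
    check_bundle_matches(bundle, cert_pem, sig, artifact_digest)  # illustrative

    # 2. SignedEntryTimestamp is the log's signature over the canonicalized
    #    Payload (body + integratedTime + logIndex + logID). Verifying it
    #    with Rekor's key proves inclusion without contacting the log.
    payload = canonicalize_json(bundle["Payload"])                # illustrative
    set_sig = base64.b64decode(bundle["SignedEntryTimestamp"])
    verify_raw_signature(rekor_pubkey, set_sig, payload)          # illustrative

    # 3. Verify the artifact signature with the certificate, same as online.
    verify_signature(cert_pem, sig, artifact_digest)              # illustrative
```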
We will want Central to verify signatures as part of the publication process, and expect Central to host signature validation materials for client usage. It is ideal if these validation materials are provided by publishers, but it is certainly possible for Central to retrieve / assemble them if they are not provided. The Shopify / Python RFC articulates the same approach.
Question
Jason Swank mentioned in the OSSF Software Packages meeting that he was more concerned about verification load for multi-artifact releases than about creating entries. It got me thinking that offline verification requires the Rekor signed entry timestamp to be stored locally. Should we be publishing those records to Central with the sig and pem files to enable offline verification?