
fix(backport/1.0): make client shard aware when verifying entries on inactive shards #286

Merged: 4 commits into slsa-framework:release/v1.0 on Oct 3, 2022

Conversation

asraa (Contributor) commented on Oct 2, 2022

No description provided.

asraa changed the base branch from main to release/v1.0 on Oct 2, 2022
asraa changed the title from "fix: make client shard aware when verifying entries on inactive shards" to "fix(backport/1.0): make client shard aware when verifying entries on inactive shards" on Oct 2, 2022
ianlewis (Member) left a comment


I realize it's not completely new, but I think it's a bit unfortunate that rekor clients like slsa-verifier need to have logic for sharding. It feels like clients are being exposed to too many implementation details and that's making things brittle.

main_test.go (outdated)
@@ -308,7 +308,7 @@ func Test_runVerify(t *testing.T) {
 	name: "rekor upload bypassed",
 	artifact: "binary-linux-amd64-no-tlog-upload",
 	source: "github.com/slsa-framework/example-package",
-	err: pkg.ErrorRekorSearch,
+	err: pkg.ErrorNoValidRekorEntries,
ianlewis (Member) commented

A bunch of other tests seem to be generating this error. I'm not sure if that's intentional, but you may need to update other tests.

asraa (Contributor, Author) replied

Figured it out! It turns out the problem was the returned UUID lengths, which was hairy: old entries (like the ones in these tests) were added when Rekor UUIDs were 64 chars. That caused verification errors now, when extracting tree IDs from full UUIDs (now 80 chars) in order to perform the verifyRootHash step, which broke when rotating shards.

Now I use the full 80-char UUID returned from GetLogEntryByUUID.
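For illustration, here is a minimal sketch of the length handling this implies. The constants, the splitEntryID helper, and the slicing logic are assumptions for illustration, not the actual slsa-verifier code; the grounded facts are that a sharded Rekor entry ID is 80 hex chars (a 16-char tree ID prefix plus the 64-char entry UUID), while pre-sharding entries are bare 64-char UUIDs, so unconditionally slicing off a tree ID prefix breaks on old entries.

```go
package main

import (
	"fmt"
	"strings"
)

const (
	treeIDLen    = 16 // hex chars identifying the shard's Merkle tree
	entryUUIDLen = 64 // hex chars of the entry UUID within the shard
)

// splitEntryID separates a full 80-char entry ID into its tree ID and
// UUID, and passes a legacy 64-char UUID through with an empty tree ID.
// Hypothetical helper for illustration, not slsa-verifier's code.
func splitEntryID(id string) (treeID, uuid string, err error) {
	switch len(id) {
	case treeIDLen + entryUUIDLen: // 80 chars: sharded entry ID
		return id[:treeIDLen], id[treeIDLen:], nil
	case entryUUIDLen: // 64 chars: pre-sharding entry, UUID only
		return "", id, nil
	default:
		return "", "", fmt.Errorf("unexpected entry ID length %d", len(id))
	}
}

func main() {
	sharded := strings.Repeat("1", treeIDLen) + strings.Repeat("a", entryUUIDLen)
	legacy := strings.Repeat("a", entryUUIDLen)
	for _, id := range []string{sharded, legacy} {
		treeID, uuid, err := splitEntryID(id)
		fmt.Printf("treeID=%q uuid=%s... err=%v\n", treeID, uuid[:8], err)
	}
}
```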

Two review threads on pkg/provenance.go (outdated, resolved).
asraa (Contributor, Author) commented on Oct 3, 2022

> I realize it's not completely new, but I think it's a bit unfortunate that rekor clients like slsa-verifier need to have logic for sharding. It feels like clients are being exposed to too many implementation details and that's making things brittle.

I completely agree: sharding was meant to be internal to the client, but it wasn't. Things are getting better (the returned-UUID sharding comparisons can eventually be removed), but they don't seem like they'll be stable until after GA. It's been a pain to deal with and debug this logic.

I should be able to clean up this logic once GA gives us a stable client, though. For example, we will no longer need to fetch the root hash and search shards for the correct one: a root hash will be returned with the entry.
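As a hedged illustration of the pre-GA workaround being described (trying each shard, active and inactive, until the entry's inclusion proof verifies against that shard's root hash), here is a sketch. Every type and helper below is a hypothetical stand-in, not the Rekor or slsa-verifier API; once Rekor returns the root hash alongside the entry, this search loop goes away.

```go
package main

import (
	"errors"
	"fmt"
)

// shard is a hypothetical stand-in for a Rekor log shard.
type shard struct {
	treeID   string
	rootHash string
}

// entry is a hypothetical stand-in for a fetched log entry.
type entry struct {
	uuid string
}

// verifyInclusion is a placeholder: real code would verify the entry's
// Merkle inclusion proof against rootHash.
func verifyInclusion(e entry, rootHash string) bool {
	return false // stub; always fails in this sketch
}

// findShard returns the first shard whose root hash verifies the entry's
// inclusion proof, mirroring the search that a stable client would remove.
func findShard(e entry, shards []shard) (shard, error) {
	for _, s := range shards {
		if verifyInclusion(e, s.rootHash) {
			return s, nil
		}
	}
	return shard{}, errors.New("no shard verified the entry's inclusion proof")
}

func main() {
	shards := []shard{{treeID: "1111", rootHash: "abc"}, {treeID: "2222", rootHash: "def"}}
	_, err := findShard(entry{uuid: "aaaa"}, shards)
	fmt.Println(err) // the stub never verifies, so this reports the error
}
```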

I'm hesitant to make some of these changes prematurely though, because rekor keeps rolling back versions :|

Signed-off-by: Asra Ali <asraa@google.com>
asraa merged commit 191a3bd into slsa-framework:release/v1.0 on Oct 3, 2022
ianlewis (Member) commented on Oct 4, 2022

> I should be able to clean up this logic once GA gives us a stable client, though. For example, we will no longer need to fetch the root hash and search shards for the correct one: a root hash will be returned with the entry.

It's a bit unclear from the outside what is GA and what isn't with regard to Rekor, and that uncertainty carries over to slsa-github-generator, since we rely on the public Rekor instance.

> I'm hesitant to make some of these changes prematurely though, because rekor keeps rolling back versions :|

Completely understandable. We need to work with what we have currently.
