Revert "Fix SHA for go-runner manifest - bump version" #855

Merged: 1 commit into kubernetes:master on May 8, 2020

Conversation

@listx (Contributor) commented on May 8, 2020

This reverts commit 97278f1.

Here is the sequence of events. Originally, #849 incorrectly promoted
sha256:536ab131b0d0e3b13eb83c985cc0ac9ba7e69e7dac9521e6cacd6a8b6019e0a6
to `v0.1.0`. This was in error because this digest was not the digest of
the manifest list, but only of the amd64 image.
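
For illustration, here is a rough sketch of what the erroneous entry from #849 would have looked like. The `name`/`dmap` layout is assumed from the promoter's manifest format; the actual file contents in this repo may differ:

```yaml
# Sketch only -- assumes the promoter's name/dmap manifest layout.
# The digest below is the amd64 image's digest, NOT the manifest
# list's digest, which is what made this promotion incorrect.
- name: go-runner
  dmap:
    "sha256:536ab131b0d0e3b13eb83c985cc0ac9ba7e69e7dac9521e6cacd6a8b6019e0a6": ["v0.1.0"]
```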

Then, PR #850 was created to fix this. Originally, that PR assigned the
correct digest of the manifest list,
sha256:8ee20934e6c005a9ce8d6d8b7ed23698c3bb80e0b30a3d49e5aeca928cc69bf3,
to a new tag, `v0.1.1`. This was OK for the promoter (it was what I
LGTM'ed), because it assigned a new tag to a new digest; however, it was
NOT OK because the underlying image was really versioned as
`v0.1.0` (kubernetes/kubernetes#90804). Later,
PR #850 *was changed* so that
sha256:8ee20934e6c005a9ce8d6d8b7ed23698c3bb80e0b30a3d49e5aeca928cc69bf3
was tagged as `v0.1.0`, while a new build of `v0.1.1` was created in
kubernetes/kubernetes#90852.
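
In the same sketched format (layout assumed, digests from above), the two revisions of #850 differ only in which tag the manifest-list digest is attached to:

```yaml
# Original revision of #850 (sketch): a NEW tag for a NEW digest --
# acceptable to the promoter, but it mislabeled the image's real version.
- name: go-runner
  dmap:
    "sha256:8ee20934e6c005a9ce8d6d8b7ed23698c3bb80e0b30a3d49e5aeca928cc69bf3": ["v0.1.1"]
```

```yaml
# Changed revision of #850 (sketch): the same digest now claims the
# EXISTING v0.1.0 tag, which the promoter could only satisfy by a tag move.
- name: go-runner
  dmap:
    "sha256:8ee20934e6c005a9ce8d6d8b7ed23698c3bb80e0b30a3d49e5aeca928cc69bf3": ["v0.1.0"]
```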

This change to promote
sha256:8ee20934e6c005a9ce8d6d8b7ed23698c3bb80e0b30a3d49e5aeca928cc69bf3
into the __existing__ `v0.1.0` tag resulted in PR #850 becoming a NOP:
the promoter simply ignored it, because tag moves are not supported.
That is, in order to honor the intent of PR #850, the promoter would
have had to __delete__ the `v0.1.0` tag from the incorrect digest
sha256:536ab131b0d0e3b13eb83c985cc0ac9ba7e69e7dac9521e6cacd6a8b6019e0a6
and reassign it to
sha256:8ee20934e6c005a9ce8d6d8b7ed23698c3bb80e0b30a3d49e5aeca928cc69bf3.
This is why the promoter complained about tag moves in PR #850's
postsubmit run, and is still complaining even after PR #853 (promoting
the newly-built `v0.1.1` image) was merged, like this:

```
...
tag 'v0.1.0' in dest points to sha256:536ab131b0d0e3b13eb83c985cc0ac9ba7e69e7dac9521e6cacd6a8b6019e0a6, not
sha256:8ee20934e6c005a9ce8d6d8b7ed23698c3bb80e0b30a3d49e5aeca928cc69bf3 (as per  the manifest),
but tag moves are not supported; skipping
...
```

Our promoter manifests should be kept free of impossible-to-do intent.
The solution is to either (1) delete the existing `v0.1.0` tag from
production (making the intent not a tag move, but a tag add) and keep
the promoter manifest as-is, or (2) revert PR #850 and keep the
incorrect digest for `v0.1.0` to silence the tag move warning. Because
this is not a production emergency, (1) would be against policy. Hence
this PR, which opts for option (2).
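
Concretely, under the same sketched layout, the post-revert intent (together with PR #853) implies no tag move:

```yaml
# Sketch of the intent after this revert plus #853: v0.1.0 stays on the
# original (amd64-only) digest, so the existing production tag is left
# untouched -- no tag move.
- name: go-runner
  dmap:
    "sha256:536ab131b0d0e3b13eb83c985cc0ac9ba7e69e7dac9521e6cacd6a8b6019e0a6": ["v0.1.0"]
    # v0.1.1 maps to the digest of the rebuilt image promoted in #853
    # (digest not reproduced here).
```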

That being said, the dry run of the promoter in PR #850 did correctly
detect the tag move as well, but exited without an error. We should exit
with an error on tag moves during dry runs in the future, because tag
moves result from prohibited manifest intent. This is tracked here:
kubernetes-sigs/promo-tools#212.

/cc @dims @justaugustus

@k8s-ci-robot added the cncf-cla: yes and size/XS labels on May 8, 2020
@k8s-ci-robot (Contributor) commented:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: listx

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the area/artifacts, area/release-eng, approved, sig/release, and wg/k8s-infra labels on May 8, 2020
@dims (Member) commented on May 8, 2020

thanks @listx !! sorry

/lgtm

@k8s-ci-robot added the lgtm label on May 8, 2020
@dims (Member) commented on May 8, 2020

+1 to "We should exit with an error on tag moves during dry runs"

@k8s-ci-robot merged commit 75ef1c8 into kubernetes:master on May 8, 2020