MCO-540: Adding conditional update "PodmanMissingAuthFlag" for 4.11.* to 4.12.* #3266
Conversation
…* upgrade paths

Like 034fa01 (blocked-edges/4.12.*: Declare AWSOldBootImages, 2022-12-14, openshift#2909) and 957626a, this conditional risk is sticky, because we don't have PromQL access to boot-image age, so we cannot automatically distinguish between "born in 4.1 and still uses the old boot images" and "born in 4.1, but has subsequently updated boot images". And because of a cluster-version operator bug, we don't necessarily have access to the cluster's born-in release anyway. The CVO bug fix went back via:

* 4.11.0 https://bugzilla.redhat.com/show_bug.cgi?id=2097067#c12
* 4.10.24 https://bugzilla.redhat.com/show_bug.cgi?id=2108292#c6
* 4.9.45 https://bugzilla.redhat.com/show_bug.cgi?id=2108619#c6
* 4.8.47 https://bugzilla.redhat.com/show_bug.cgi?id=2109962#c6
* 4.7.59 https://bugzilla.redhat.com/show_bug.cgi?id=2117347#c8
* 4.6.61 https://bugzilla.redhat.com/show_bug.cgi?id=2118489#c6

So it's possible for someone born in 4.1 to have spent a whole bunch of time in 4.9.z and be reporting a 4.9.0-rc.* or something as their born-in version. Work around that by declaring this risk for AWS clusters where the born-in version is 4.9 or earlier, expecting that we'll have this issue fixed soonish, so folks with old boot images will be able to update to a later 4.11, and allowing us to be overly broad/cautious with the risk matching here.

Signed-off-by: Lalatendu Mohanty <lmohanty@redhat.com>
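For context, a sticky conditional risk like this is declared in cincinnati-graph-data as a blocked-edges YAML file with a PromQL matching rule. The sketch below is illustrative only: the field names follow the repository's blocked-edges schema, but the URL, message text, and PromQL query are hypothetical stand-ins, not the exact rule this PR adds.

```yaml
# Hypothetical blocked-edges entry sketching the shape of a sticky
# conditional update risk. Field names match the cincinnati-graph-data
# blocked-edges schema; the values are illustrative placeholders.
to: 4.12.0            # the target release whose incoming edges carry the risk
from: 4\.11\..*       # regex over source releases for the risky edges
url: https://access.redhat.com/solutions/example   # hypothetical KCS article
name: PodmanMissingAuthFlag
message: |-
  Illustrative message shown to cluster administrators describing the
  risk and any available mitigation.
matchingRules:
- type: PromQL
  promql:
    promql: |
      # Hypothetical query: because boot-image age is not exposed via
      # PromQL, a sticky rule like this must over-match, e.g. on the
      # cluster's reported born-in version and platform.
      group(cluster_version{type="initial"})
```

When the PromQL expression returns a result for a cluster, the cluster-version operator marks the conditional update as risky for that cluster; because the metrics here cannot prove the risk has been resolved, the match stays "sticky" as the commit message describes.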
/test e2e-latest-cincinnati
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: LalatenduMohanty, wking

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing
@LalatenduMohanty: all tests passed!

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.