Bug 1865998: Tolerate multiple package manifests with the same name #6225
Conversation
@spadgett: This pull request references Bugzilla bug 1865998, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker. 3 validation(s) were run on this bug
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Note this is only a partial fix. Links to package manifest details pages will still be ambiguous since we rely on name + namespace to be unique in the URL.
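For illustration, a rough TypeScript sketch of why the details link stays ambiguous: the path is derived from namespace and name alone, so two duplicated manifests collapse to the same URL. The helper name and path shape here are assumptions, not the actual console routing code.

```ts
// Illustrative only; the real console route helper and path format may differ.
const packageManifestDetailsPath = (ns: string, name: string): string =>
  `/k8s/ns/${ns}/packagemanifests/${name}`;

// Two PackageManifests duplicated by the OLM bug share name and namespace,
// so both resolve to the same details-page URL and the link can't tell them apart.
packageManifestDetailsPath('openshift-marketplace', 'etcd');
```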
/cherry-pick release-4.5
@spadgett: once the present PR merges, I will cherry-pick it on top of release-4.5 in a new PR and assign it to you. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Force-pushed from 6a812d2 to 7c5f468
/assign @TheRealJon @andrewballantyne
This change partially works around upstream OLM bug https://bugzilla.redhat.com/show_bug.cgi?id=1814822. Generate a unique key for package manifests in our k8s reducer when name and namespace aren't unique.
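As a rough sketch of the idea (illustrative TypeScript, not the actual openshift/console reducer; the helper names and the use of the UID are assumptions), the reducer key can fall back to something guaranteed unique when name + namespace collide:

```ts
// Hypothetical sketch: key PackageManifests so duplicates don't overwrite
// each other in the Redux store.
interface K8sObject {
  metadata: { name: string; namespace?: string; uid: string };
}

// Default key: namespace/name, which the OLM bug makes non-unique for
// duplicated PackageManifests.
const defaultKey = (obj: K8sObject): string =>
  `${obj.metadata.namespace || ''}/${obj.metadata.name}`;

// Unique key: append the object's UID so two manifests with the same
// name and namespace still get distinct entries.
const packageManifestKey = (obj: K8sObject): string =>
  `${defaultKey(obj)}/${obj.metadata.uid}`;
```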
Force-pushed from 7c5f468 to 54e0671
/retest
/lgtm If the tests pass I think we are good to go. It appears to have addressed the issue.
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: andrewballantyne, spadgett The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
It got past the original failure, although it failed a later test. I'll dig into this once the artifacts are captured. It might be a different flake.
/retest
Hoping it's an unrelated flake?
I think it's an unrelated flake. I'm testing locally now.
It works locally for me (both testing manually and running the protractor tests). I have a suspicion the resource updated in the background while the test was running, which prevented the test from saving the YAML. But I'm not sure.
I'm not an expert on this, but I thought you had referenced screenshots before when tests fail? Like the page as-is when the test failed.
/retest Please review the full test history for this PR and help us cut down flakes.
Yeah, the screenshot is here: it's under Artifacts -> e2e-gcp-console -> gui_test_screenshots from the test details page.
@spadgett: All pull requests linked via external trackers have merged: openshift/console#6225. Bugzilla bug 1865998 has been moved to the MODIFIED state. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@spadgett: new pull request created: #6237 In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Digging into the YAML flake more, I've confirmed the resource was updated in the background:
I'm tempted to remove the lines that save the YAML editor in the OLM tests. That's already well-covered by the CRUD tests and doesn't seem specific to OLM. I'm surprised there isn't an error message in the screenshot, though.
Sounds like overlap that adds to the flakes. I think we don't need tests for this YAML save ... not sure it adds any quality to our tests.
I was too - figured maybe the error got replaced with the info or something haha.
That's exactly it! Here's how to reproduce manually:
The error goes away from tab 1. I'm almost certain that's what happened in the tests, in which case we got unlucky. And things are working as expected. I'm not sure if there's a good way to fix this other than removing the save from the test. Arguably it's a bug that we clear the error on background updates, though.
I guess an alternate fix is to always click Reload before Save if we want to keep the test.
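For what it's worth, a minimal sketch of that approach, assuming protractor and hypothetical button selectors (the real test helpers and element IDs in the repo may differ):

```ts
import { $, browser, ExpectedConditions as until } from 'protractor';

// Hypothetical selectors for the YAML editor's Reload and Save buttons.
const reloadButton = $('#reload-object');
const saveButton = $('#save-changes');

// Reload first to pick up any background update to the resource, then save,
// narrowing the window for a conflict with a stale resourceVersion.
export const reloadThenSave = async (): Promise<void> => {
  await browser.wait(until.elementToBeClickable(reloadButton), 10000);
  await reloadButton.click();
  await browser.wait(until.elementToBeClickable(saveButton), 10000);
  await saveButton.click();
};
```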
I think that will just reduce the flake count... not eliminate it. I don't think there's anything in a resource's spec that prevents it from being updated back to back based on whatever criteria it deems necessary.
Imo, definitely a bug :) I don't think we should clear errors until another submit is triggered. Submit errors should stick around even if the data reloads, since the user is the only one who controls how quickly they read and understand that error message... so removing it programmatically feels like bad UX.
I opened https://bugzilla.redhat.com/show_bug.cgi?id=1866875 for the error message getting cleared.
Generate a unique key for package manifests in our k8s reducer when name and namespace aren't unique.