Fix flaky test for services that shouldn't be available when PublishNotReadyAddresses is false #121588
Conversation
Welcome @vlasebian!
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the `triage/accepted` label and provide further guidance. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Hi @vlasebian. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test` on its own line. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test
Force-pushed the branch (…otReadyAddresses is false) from 4cf8992 to feb0e2f (Compare)
/test pull-kubernetes-linter-hints
@vlasebian: The following test failed, say `/retest` to rerun all failed tests or `/retest-required` to rerun all mandatory failed tests:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Hi, @aojea! I checked the failing job, but I am a bit stuck. From what I can see in the job details, pause-pod-1 was not created because of a reserved-name error.
Is there something wrong with the PR? Is there some kind of cleanup that needs to be done so that the name is available for the pod? Or is it okay to just retrigger the job?
The other jobs are passing, and this failing job has a lot of failures related to creating pods.
/lgtm
LGTM label has been added. Git tree hash: 84b5d9ffdfba72304440710bb8eb78ed87bbd26c
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: aojea, vlasebian. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment. Approvers can cancel approval by writing `/approve cancel` in a comment.
What type of PR is this?
/kind bug
/kind flake
What this PR does / why we need it:
Fixes a flaky network e2e test that checks the ability to connect to terminating and unready endpoints when PublishNotReadyAddresses is false. The fix, as described in the issue, checks that the rules are programmed on both of the nodes used in the test, not only on one of them.
Which issue(s) this PR fixes:
Fixes #121209
Special notes for your reviewer:
None
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: