e2e: Don't try to create a UDP LoadBalancer on AWS #20959
Conversation
Labelling this PR as size/M
GCE e2e test build/test passed for commit f5962c152e568e7b8add8fbf265b06ca57796e59.
Still running these tests against AWS... please don't LGTM just yet!
GCE e2e test build/test passed for commit 091a7ea23e8787bf5dbc7300ada9428b1fea5953.
cc @thockin
LGTM. Justin, feel free to self-lgtm when you think it's done. Let me know if you want another round of review for some reason :)
AWS doesn't support type=LoadBalancer with UDP services. For now, we simply skip over the test with type=LoadBalancer on AWS for the UDP service. Fix kubernetes#20911
Further AWS e2e testing revealed that I had to add two more conditionals for some cases I missed (where it verified the previous state), but it's passing on AWS now. Also, we were using the wrong timeout after the initial creation of the TCP LB (LagTimeout vs CreateTimeout); I also added a note about the fact that the create timeout is conflating two concepts: https://github.com/kubernetes/kubernetes/pull/20959/files#diff-20a4e2095b63ecd60dd25e78bcd67372R55 This only surfaces on AWS because there's a delay after the LB is created and appears in the API before it actually starts working (DNS propagation, I think). Given all that, would you mind giving it a quick once-over again @thockin? I think it's fine, but I don't feel entirely right doing a self-LGTM given that the extra changes aren't 100% trivial. 95% trivial, but not 100%...
GCE e2e test build/test passed for commit a3558fa40c47213f854aedf21fdfb8e3e1786d2f.
@@ -52,6 +52,10 @@ const loadBalancerLagTimeout = 2 * time.Minute

// How long to wait for a load balancer to be created/modified.
// TODO: once support ticket 21807001 is resolved, reduce this timeout back to something reasonable
// TODO: this timeout is actually used in two distinct senses; once when waiting
This is not quite right - I used it again after changing the port number because that operation (on GCE) causes a full rebuild of the LB, so is akin to a create operation.
only comments about the timeout value
Changed the timeout commit to just use a longer lag-timeout on AWS for load balancers (10 minutes). You were right @thockin (of course): the timeouts were set up correctly, just a little too short for the slow clouds :-)
Labelling this PR as size/L
GCE e2e build/test failed for commit 69f8a37e22507f5fb1fd316d493c277cf1bcfa9c.
LGTM
self-lgtm-ing, with thockin's approval on slack.
@k8s-bot test this Tests are more than 48 hours old. Re-running tests.
GCE e2e test build/test passed for commit 46b8946.
@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]
GCE e2e build/test failed for commit 46b8946.
GCE e2e test build/test passed for commit 46b8946.
@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]
GCE e2e test build/test passed for commit 46b8946.
@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]
GCE e2e build/test failed for commit 46b8946.
GCE e2e test build/test passed for commit 46b8946.
@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]
GCE e2e test build/test passed for commit 46b8946.
@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]
GCE e2e test build/test passed for commit 46b8946.
Automatic merge from submit-queue |
Auto commit by PR queue bot
UPSTREAM: 68023: Replace git volume with configmap in emptydir wrapper conflict test Origin-commit: 9e3b962ac61ae5425031a5bc08cf8b8fbfc72378
Instead of assuming that the AWS default applies to all providers, as we've done since we carried the upstream code in 9732d01 (test: Add a service upgrade test that verifies availability, 2020-01-28, openshift#24471). This catches our copied-and-tweaked code up with upstream logic that has switched on provider since kubernetes/kubernetes@46b89464fd (e2e: Allow longer for AWS LoadBalancers to start serving traffic, 2016-02-10, kubernetes/kubernetes#20959).
…nSuccessCount Instead of assuming that the AWS default applies to all providers, as we've done since we carried the upstream code in 9732d01 (test: Add a service upgrade test that verifies availability, 2020-01-28, kubernetes/kubernetes@46b89464fd (e2e: Allow longer for AWS LoadBalancers to start serving traffic, 2016-02-10, kubernetes/kubernetes#20959)), but Clayton doesn't want to depend on an upstream constant that might grow later without us noticing. Hard-coding to 10m, which should be slow enough for all providers and is decoupled from upstream constants.
AWS doesn't support type=LoadBalancer with UDP services. For now, we
simply skip over the test with type=LoadBalancer on AWS for the UDP
service.
Fix #20911
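The skip described above can be sketched as follows. This is a self-contained stand-in for the e2e framework's provider check, not the actual test code: `provider`, `providerIs`, and `loadBalancerSupportsUDP` are illustrative names.

```go
package main

import "fmt"

// provider would come from the test framework's configuration; hard-coded
// here for illustration.
var provider = "aws"

// providerIs is a minimal stand-in for the e2e framework's provider check.
func providerIs(names ...string) bool {
	for _, n := range names {
		if n == provider {
			return true
		}
	}
	return false
}

func main() {
	// AWS load balancers do not support UDP listeners, so the UDP
	// type=LoadBalancer case is skipped there.
	loadBalancerSupportsUDP := !providerIs("aws")
	if !loadBalancerSupportsUDP {
		fmt.Println("skipping UDP LoadBalancer test on aws")
		return
	}
	fmt.Println("running UDP LoadBalancer test")
}
```

Gating on the provider rather than deleting the case keeps the UDP LoadBalancer coverage intact on clouds that do support it.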