flaky e2e: Kubectl client Kubectl expose should create services for rc #14078
@jlowdermilk Someone on the kubectl team to investigate and fix up?
Create timeout(s) probably need some adjusting. Looking into it.
It's essentially the same as Services.*expose, just using kubectl instead of client lib. Fixes kubernetes#14078
I'd suggest that we leave this one open to track debugging of the failures. Declaring it flaky doesn't fix the problem :-)
Fair enough. Note that it's only flaky when run in parallel. The non-parallel test runs are 100% green. Reprioritizing accordingly.
After re-adding to the parallel suite, this has been 100% stable.
Seems to be failing on ~10% of PRs:
http://kubekins.dls.corp.google.com:8081/job/kubernetes-pull-build-test-e2e-gce/8733/testReport/junit/%28root%29/Kubernetes%20e2e%20suite/Kubectl_client_Kubectl_expose_should_create_services_for_rc/history/?start=75
Appears to wait < 1 second for the command to succeed, e.g.
06:05:15 STEP: exposing RC
06:05:15 Sep 16 06:03:59.693: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://104.197.153.46 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e-gce/workspace/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-pwnje'
06:05:15 Sep 16 06:03:59.923: INFO: service "rm2" exposed
06:05:15
06:05:15 Sep 16 06:04:04.928: INFO: Service rm2 in namespace e2e-tests-kubectl-pwnje found.
06:05:15 Sep 16 06:04:04.930: INFO: No endpoint found, retrying
06:05:15 Sep 16 06:04:09.983: INFO: No endpoint found, retrying
06:05:15 Sep 16 06:04:14.997: INFO: No endpoint found, retrying
06:05:15 Sep 16 06:04:20.000: INFO: No endpoint found, retrying
06:05:15 Sep 16 06:04:25.004: INFO: No endpoint found, retrying
06:05:15 Sep 16 06:04:30.007: INFO: No endpoint found, retrying
06:05:15 Sep 16 06:04:35.017: INFO: No endpoint found, retrying
06:05:15 Sep 16 06:04:40.394: INFO: No endpoint found, retrying
06:05:15 Sep 16 06:04:45.397: INFO: No endpoint found, retrying
06:05:15 Sep 16 06:04:50.400: INFO: No endpoint found, retrying
06:05:15 Sep 16 06:04:55.413: INFO: No endpoint found, retrying
06:05:15 Sep 16 06:05:00.450: INFO: No endpoint found, retrying
06:05:15 [AfterEach] Kubectl client
06:05:15 /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:85
06:05:15 STEP: Destroying namespace for this suite e2e-tests-kubectl-pwnje
06:05:15
06:05:15
06:05:15 • Failure [102.315 seconds]
06:05:15 Kubectl client
06:05:15 /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:663
06:05:15 Kubectl expose
06:05:15 /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:424