[k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy 21m14s #41889
I did not see this happening until recently.

The controller misses that the first pod has transitioned to unready and proceeds to create the second pod. The test uses a client to hit the API server directly and scales up the StatefulSet as soon as the broken pod is observed, but the controller works from caches, so it may observe the scaled-up StatefulSet before it observes the updated pod.
I will update the test to ensure that we wait until the unreadiness is observed on the StatefulSet.Status.Replicas field prior to the scaling operation.
SGTM
Automatic merge from submit-queue (batch tested with PRs 42443, 38924, 42367, 42391, 42310)

Fix StatefulSet e2e flake

**What this PR does / why we need it**: Fixes a StatefulSet e2e flake by ensuring that the StatefulSet controller has observed the unreadiness of Pods prior to attempting to exercise scale functionality.

**Which issue this PR fixes**: fixes #41889

```release-note
NONE
```
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/pr-logs/pull/41821/pull-kubernetes-e2e-gce-gci/18753/
go run hack/e2e.go -v -test --test_args='--ginkgo.focus=StatefulSet\s\[k8s\.io\]\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sScaling\sshould\shappen\sin\spredictable\sorder\sand\shalt\sif\sany\sstateful\spod\sis\sunhealthy$'
@ncdc do you know the statefulset owners?