Field minReadySeconds forces some replicas to wait more than predefined threshold #101319
Comments
@rafaellima: This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the `triage/accepted` label and provide further guidance.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/sig apps
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close
@k8s-triage-robot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Another data point: we are running into this issue in our production environment on EKS (…).
What happened:
According to the official Kubernetes deployment docs, the field `.spec.minReadySeconds` has the following definition:

When I create a new deployment with `minReadySeconds` set (let's say 60 seconds), and this deployment has more than one replica and a readiness probe configured, I expect a pod to be available 60 seconds after its readiness probe succeeds. However, the scenario I encountered is that after the first pod is considered available, all the other pods take the value of `minReadySeconds` again to be considered healthy.

I started looking into the source code to understand this behavior, and I found a piece of code in the replicaset controller that could be causing the issue. From what I could understand, it checks the availability of all pods and re-enqueues the ReplicaSet to run again via `rsc.queue.AddAfter`, using `minReadySeconds` as the delay. That implies that if only one pod is available, all the others only get checked after a full `minReadySeconds` interval.

What you expected to happen:
All replicas of a deployment should be available `minReadySeconds` after their transition to `Ready`.

How to reproduce it (as minimally and precisely as possible):
Use this deployment manifest:
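The manifest itself was not captured in this copy of the issue; a minimal sketch matching the setup described (more than one replica, a readiness probe, and `minReadySeconds` set to 60) might look like the following, where the name, image, and probe details are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minready-test        # hypothetical name
spec:
  replicas: 3
  minReadySeconds: 60
  selector:
    matchLabels:
      app: minready-test
  template:
    metadata:
      labels:
        app: minready-test
    spec:
      containers:
      - name: nginx
        image: nginx:1.19    # any image with an HTTP endpoint works
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5
```

Such a manifest can be applied with `kubectl apply -f` and the availability transitions observed with `kubectl get deployment -w`.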
Run the following commands:
This produces the following output:
The time between the last two lines is exactly what is defined in `minReadySeconds`, even though the remaining pods were eligible to become available before that time.

Anything else we need to know?:
The issue is reproducible using kind on my local machine, and it also happens in our production environment running on Linux at a cloud provider.
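The re-enqueue behavior described under "What happened" can be sketched with a small, self-contained simulation. This is a sketch only: the numbers are illustrative, and `available`/`availableAt` are hypothetical helpers, not the actual controller code.

```go
package main

import "fmt"

const minReadySeconds = 60 // seconds, as in the example above

// available reports whether a pod whose readiness probe first succeeded
// at readyAt (seconds) is considered available at time now.
func available(readyAt, now int) bool {
	return now-readyAt >= minReadySeconds
}

// availableAt simulates the schedule described above: whenever some pod
// is not yet available, the whole ReplicaSet is re-enqueued a full
// minReadySeconds later (the rsc.queue.AddAfter call), instead of at each
// pod's own readyAt+minReadySeconds moment. It returns the time at which
// each pod is finally marked available.
func availableAt(readyAts []int) []int {
	markedAt := make([]int, len(readyAts))
	for i := range markedAt {
		markedAt[i] = -1 // not yet available
	}
	now := 0
	for {
		allAvailable := true
		for i, readyAt := range readyAts {
			if markedAt[i] == -1 && available(readyAt, now) {
				markedAt[i] = now
			}
			if markedAt[i] == -1 {
				allAvailable = false
			}
		}
		if allAvailable {
			return markedAt
		}
		now += minReadySeconds // re-enqueue a full minReadySeconds later
	}
}

func main() {
	// Pod 0 passes readiness at t=0s, pod 1 at t=10s (illustrative).
	fmt.Println(availableAt([]int{0, 10})) // Prints: [60 120]
}
```

With readiness at t=0s and t=10s, the second pod is only marked available at t=120s instead of the ideal t=70s, matching the fixed-interval gap described above.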
Environment:
- Kubernetes version (use `kubectl version`):
- OS (e.g: `cat /etc/os-release`): OSX - Big Sur 11.2.3
- Kernel (e.g. `uname -a`):