Closed as not planned
Labels
area/app-lifecycle, area/node-lifecycle, area/nodecontroller, area/stateful-apps, area/usability, lifecycle/rotten (aged beyond stale, will be auto-closed), needs-triage, sig/node
Description
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
/kind feature
What happened:
- A StatefulSet at scale 1 is created.
- The only pod is placed and running on one of the 2 worker nodes.
- The worker node with the running pod shuts down and never starts up again.
- The pod never moved to the other node.
What you expected to happen:
- The pod would move to the other node after expiration of the default tolerations "node.alpha.kubernetes.io/notReady:NoExecute for 300s" and "node.alpha.kubernetes.io/unreachable:NoExecute for 300s" (see the sketch after this list).
- "kubectl delete pod pod-on-shutdown-node" would induce the expected movement while the node is down -- it did not happen either.
How to reproduce it (as minimally and precisely as possible):
- Create a StatefulSet spec with one container and one replica in, say, sset.yml (a minimal sketch follows this list).
- Have a Kubernetes installation with 2 worker nodes.
- kubectl create -f sset.yml
- kubectl get pod, to see on which worker node the only pod is scheduled, say node N.
- Shut down node N with "shutdown -h".
- Check that the pod has not moved to the other worker node within 10 minutes of node N halting.
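A minimal sset.yml along these lines reproduces the setup; names, labels, and the nginx image are placeholders, not taken from the original report:

```yaml
# Headless service required by the StatefulSet.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  clusterIP: None
  selector:
    app: web
  ports:
  - port: 80
---
# Single-replica StatefulSet (apps/v1beta2 is the API group available in 1.8).
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.13
        ports:
        - containerPort: 80
```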
Anything else we need to know?:
- A Deployment behaves as indicated in the "What you expected to happen" section (a minimal sketch for comparison follows).
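For comparison, a minimal single-replica Deployment of the kind the report contrasts against (again with placeholder names and image):

```yaml
# Single-replica Deployment used for comparison; its pod is rescheduled to the
# other worker node after the default tolerations expire, per the report above.
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: web-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-deploy
  template:
    metadata:
      labels:
        app: web-deploy
    spec:
      containers:
      - name: nginx
        image: nginx:1.13
```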
Environment:
- Kubernetes version (use kubectl version): 1.8.1
- Cloud provider or hardware configuration: Virtual machines with Vagrant 2.0.0 and VirtualBox 5.1.28-117968 on an Intel(R) Xeon(R) CPU E5-2690 v3 (24 cores) with Ubuntu 16.04 LTS
- OS (e.g. from /etc/os-release): Ubuntu 16.04.3 LTS (VM)
- Kernel (e.g. uname -a): 4.4.0-96-generic (VM)
- Install tools: kubeadm 1.8.1-00
- Others:
Edit: The goal of this issue is to update the documentation and clarify the expected behavior as per: #54368 (comment)