After updating the cluster spec with authentication parameters (kubeAPIServer.admissionControl), the rolling update fails right on the first master being updated.
[...]
I0416 15:51:31.284819 29901 instancegroups.go:273] Cluster did not pass validation, will try again in "30s" until duration "15m0s" expires: kube-system pod "calico-node-5w2lp" is not ready (calico-node).
I0416 15:52:00.575052 29901 instancegroups.go:273] Cluster did not pass validation, will try again in "30s" until duration "15m0s" expires: kube-system pod "calico-node-5w2lp" is not ready (calico-node).
I0416 15:52:30.055031 29901 instancegroups.go:273] Cluster did not pass validation, will try again in "30s" until duration "15m0s" expires: kube-system pod "calico-node-5w2lp" is not ready (calico-node).
E0416 15:52:58.178054 29901 instancegroups.go:214] Cluster did not validate within 15m0s
master not healthy after update, stopping rolling-update: "error validating cluster after removing a node: cluster did not validate within a duration of \"15m0s\""
The new master joins the cluster, but calico-node never becomes ready. The readiness check says:
Readiness probe failed: calico/node is not ready: BIRD is not ready: BGP not established with 10.x.x.x,10.x.x.x,10.x.x.x
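The BGP state behind that readiness failure can be inspected directly on the node. A diagnostic sketch, assuming SSH access to the affected master and that calicoctl is installed there (the pod name calico-node-5w2lp is taken from the logs above):

```shell
# On the affected master: show BIRD's view of each BGP peer.
# Peers stuck in anything other than "Established" match the probe failure.
sudo calicoctl node status

# Alternatively, from a machine with kubectl access, read the probe's own output:
kubectl -n kube-system exec calico-node-5w2lp -c calico-node -- \
    /bin/calico-node -bird-ready
```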
The log doesn't show any ERROR messages, though, only INFO:
2019-04-16 19:00:42.229 [INFO][42] health.go 150: Overall health summary=&health.HealthReport{Live:true, Ready:true}
Deleting the pod and letting it be recreated seems to solve the problem.
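The delete-and-recreate workaround can be sketched as follows. This assumes kubectl access to the cluster and uses the pod name from the logs above; the k8s-app=calico-node label is the one kops applies to the calico-node DaemonSet:

```shell
# Find the calico-node pod that never became Ready on the new master.
kubectl -n kube-system get pods -l k8s-app=calico-node -o wide

# Delete it; the DaemonSet recreates it, and the fresh pod re-attempts
# its BGP sessions and (in this case) passes the readiness probe.
kubectl -n kube-system delete pod calico-node-5w2lp
```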
Any ideas?
1. What kops version are you running?
Version 1.12.0-beta.2 (git-d1453d22a)
2. What Kubernetes version are you running?
1.12.7
3. What cloud provider are you using?
AWS
4. What commands did you run? What is the simplest way to reproduce this issue?
5. What happened after the commands executed?
The rolling update fails at the first node (a master instance) because calico-node does not become ready.
6. What did you expect to happen?
The rolling update to complete without errors.
7. Please provide your cluster manifest.