1. What kops version are you running? The command kops version will display this information.
$ kops version
Client version: 1.25.3 (git-v1.25.3)
2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.
3. What cloud provider are you using?
AWS
4. What commands did you run? What is the simplest way to reproduce this issue?
We ran kops rolling-update cluster <cluster_name> --yes --cloudonly to re-create the nodes. The new master node that came up didn't have a running kube-scheduler pod.
5. What happened after the commands executed?
New nodes (master and worker) came up, but the master node stayed in NotReady status because no CNI configuration was found in the /etc/cni/net.d directory. On further investigation, we found that the kube-scheduler pod was not running.
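The symptoms above can be confirmed directly on the affected master; the transcript below is illustrative (paths assume a typical kops-provisioned node, and the exact output will vary by cluster):

```
$ kubectl get nodes
$ kubectl -n kube-system get pods | grep kube-scheduler
$ ls /etc/cni/net.d
$ sudo journalctl -u kubelet | grep -i cni
```

An empty /etc/cni/net.d together with a missing kube-scheduler pod matches what we observed.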
6. What did you expect to happen?
The master node comes back up automatically in a healthy, functioning state with all kube-system pods running.
7. Please provide your cluster manifest. Execute kops get --name my.example.com -o yaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.
8. Please run the commands with the most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.
9. Anything else do we need to know?
Had to remove both --policy-configmap-namespace=kube-system and --policy-configmap=scheduler-policy to get the kube-scheduler pod to run. The manifest after the change is:
@SohamChakraborty Thank you for reporting this issue. The fix should be part of a future release.
Please also remove kubeScheduler.usePolicyConfigMap from your config. That should fix the problem long term.
/kind bug
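For reference, the long-term fix suggested above is a change to the kops cluster spec. The excerpt below is a sketch only (field placement assumes the usual kubeScheduler section of the spec; whether this setting is what injects the --policy-configmap flags is our reading of the maintainer's advice):

```yaml
# excerpt from: kops get --name my.example.com -o yaml
spec:
  kubeScheduler:
    usePolicyConfigMap: true   # remove this line per the maintainer's advice
```

After editing the spec (e.g. via kops edit cluster), a kops update cluster and rolling update would be needed for the change to take effect on the masters.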