Replies: 2 comments
This was not solely fixed by #36, but also by adding proper tags to Machine objects in 926f008.
After deploying machine-controller v0.9.9 to a cluster and then creating a MachineDeployment, nothing really happens. The MC creates a MachineSet and all 5 Machines, but no instances are spawned on AWS EC2. Killing the MC in the cluster and starting a local one built from the current master branch does bring the machines up, though (this problem is tackled in #35).
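The manifest from the original post was not preserved in this copy; a minimal sketch of a cluster-api-style MachineDeployment of that era might look like the following (the name, namespace, and replica count are hypothetical, and the provider-specific config is omitted):

```yaml
apiVersion: cluster.k8s.io/v1alpha1
kind: MachineDeployment
metadata:
  name: my-workers          # hypothetical name
  namespace: kube-system
spec:
  replicas: 5
  selector:
    matchLabels:
      name: my-workers
  template:
    metadata:
      labels:
        name: my-workers
    spec:
      providerSpec:
        value:
          cloudProvider: aws
          # cloudProviderSpec (region, instanceType, AMI, ...) omitted
```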
The machines boot up and create the kubelet-bootstrap services. Unfortunately, the kubelet cannot connect to the masters, as the journal logs show.
This is how the systemd units are configured:
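The unit files themselves are not preserved in this copy; a kubelet bootstrap unit from that Kubernetes generation might resemble the sketch below (paths, kubeconfig locations, and the exact flag set are hypothetical):

```ini
# /etc/systemd/system/kubelet.service -- hypothetical sketch
[Unit]
Description=kubelet
After=docker.service
Requires=docker.service

[Service]
ExecStart=/opt/bin/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
  --kubeconfig=/etc/kubernetes/kubelet.conf \
  --cloud-provider=aws
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```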
From what I've seen, we need to define a "cluster ID" since Kubernetes 1.10, but I could not find out how. It's supposed to be a tag of the form `kubernetes.io/cluster/<ID>=owned`, but applying it seems to be something Terraform should do for the master instances and the MC should then do for the worker instances.

This is a sub-task of #4.
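For the Terraform-managed masters, that cluster tag could be attached directly on the instance resource; a sketch under assumed names (the resource name and cluster ID `my-cluster` are hypothetical):

```hcl
resource "aws_instance" "master" {
  # ami, instance_type, subnet_id, etc. omitted

  tags = {
    # Cluster ID tag expected by the AWS cloud provider since ~1.10
    "kubernetes.io/cluster/my-cluster" = "owned"
  }
}
```

The MC would then need to apply the same tag to the worker instances it spawns, so that masters and workers end up in the same logical cluster from the cloud provider's point of view.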