Chapter 4 - Building a K8s cluster with Ansible #57

Closed
SteveJM opened this issue Jun 22, 2020 · 3 comments
SteveJM commented Jun 22, 2020

When running the cluster build playbook (P. 79), I get the following error:

...
TASK [geerlingguy.kubernetes : Ensure kubelet is started and enabled at boot.] ***************************

fatal: [kube1]: FAILED! => {"changed": false, "msg": "Unable to enable service kubelet: Failed to enable unit: Unit file /etc/systemd/system/kubelet.service is masked.\n"}

fatal: [kube2]: FAILED! => {"changed": false, "msg": "Unable to enable service kubelet: Failed to enable unit: Unit file /etc/systemd/system/kubelet.service is masked.\n"}

fatal: [kube3]: FAILED! => {"changed": false, "msg": "Unable to enable service kubelet: Failed to enable unit: Unit file /etc/systemd/system/kubelet.service is masked.\n"}

This is using the base image, as per the Vagrantfile on p. 71.

As the Ansible role is installed from Ansible Galaxy, this looks like something outside my control.


SteveJM commented Jun 23, 2020

(Untested) Steps to resolve (these probably apply to ansible-role-kubernetes):
On each host (kube1-kube3):

sudo rm /etc/systemd/system/kubelet.service
sudo systemctl daemon-reload
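An equivalent approach, instead of deleting the unit file by hand, is to let systemd remove the mask itself. This is a sketch based on standard systemd behavior, not a fix confirmed for this specific role:

```shell
# "Masked" means the unit file is a symlink to /dev/null, which is what
# `systemctl mask` creates; `unmask` removes that symlink.
sudo systemctl unmask kubelet
sudo systemctl daemon-reload

# Re-enable and start the service, which is what the failing Ansible
# task ("Ensure kubelet is started and enabled at boot") was attempting.
sudo systemctl enable --now kubelet
```

After this, re-running the playbook should get past the failing task, since the unit is no longer masked.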

@geerlingguy (Owner) commented:

Just for my own reference, this is in chapter 4, "Building a Kubernetes cluster with Ansible" — I'll test out the playbook in this repo on my laptop to see if I can reproduce the error.

@geerlingguy (Owner) commented:

I just re-ran the build fresh on my laptop and didn't run into any errors. I'm guessing it was a transient issue, maybe with one of the apt caches or something in the image. I would recommend doing a full vagrant destroy, then bringing it up again, and making sure you're on the latest versions of Vagrant, all the roles (ansible-galaxy install --force -r requirements.yml), etc.
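The full reset sequence suggested above might look like the following, assuming the book's Vagrantfile and requirements.yml are in the current directory:

```shell
# Tear down the existing VMs so the next build starts from a clean image.
vagrant destroy -f

# Force-reinstall the latest versions of all required Galaxy roles.
ansible-galaxy install --force -r requirements.yml

# Bring the cluster back up; provisioning re-runs the playbook.
vagrant up
```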

The playbook completed successfully:

...
TASK [geerlingguy.kubernetes : Join node to Kubernetes master] *****************
changed: [kube2]
changed: [kube3]

RUNNING HANDLER [geerlingguy.kubernetes : restart kubelet] *********************
changed: [kube3]
changed: [kube1]
changed: [kube2]

PLAY RECAP *********************************************************************
kube1                      : ok=56   changed=29   unreachable=0    failed=0    skipped=18   rescued=0    ignored=0   
kube2                      : ok=47   changed=24   unreachable=0    failed=0    skipped=17   rescued=0    ignored=0   
kube3                      : ok=47   changed=24   unreachable=0    failed=0    skipped=17   rescued=0    ignored=0

And I could connect to the cluster:

$ vagrant ssh kube1
vagrant@kube1:~$ sudo su
root@kube1:/home/vagrant# kubectl get pods --all-namespaces
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE
kube-system   coredns-6955765f44-72pmw        1/1     Running   0          2m12s
kube-system   coredns-6955765f44-gmrp8        1/1     Running   0          2m12s
kube-system   etcd-kube1                      1/1     Running   0          2m24s
kube-system   kube-apiserver-kube1            1/1     Running   0          2m24s
kube-system   kube-controller-manager-kube1   1/1     Running   0          2m24s
kube-system   kube-flannel-ds-amd64-4dk9t     1/1     Running   0          2m9s
kube-system   kube-flannel-ds-amd64-htq4g     1/1     Running   0          2m13s
kube-system   kube-flannel-ds-amd64-l77g2     1/1     Running   0          2m9s
kube-system   kube-proxy-9nnv7                1/1     Running   0          2m9s
kube-system   kube-proxy-hsc5d                1/1     Running   0          2m9s
kube-system   kube-proxy-jsfx5                1/1     Running   0          2m13s
kube-system   kube-scheduler-kube1            1/1     Running   0          2m24s
