Configure Flannel networking task fails on Ubuntu 20.04 #115
I have verified that all nodes have the same
A shot in the dark, but I added a full package update and reboot to see if that solved the issue. It was unsuccessful.

---
- hosts: kube
  become: true

  handlers:
    - name: reboot
      reboot:

  pre_tasks:
    # Adding to see if updating all packages will resolve the issue of the
    # Configure Flannel networking task failing on worker nodes.
    - name: update all packages  # noqa 403
      apt:
        name: '*'
        state: latest
        update_cache: true
      notify: reboot

    # Ensure handlers are flushed before moving on to geerlingguy's roles.
    - name: flush handlers
      meta: flush_handlers

  # geerlingguy's roles per Ansible for Kubernetes page 77 (2021Sep30).
  roles:
    - geerlingguy.security
    - geerlingguy.docker
    - geerlingguy.swap
    - geerlingguy.kubernetes
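For reference, a play like this would be run against the same inventory used elsewhere in this thread; the playbook filename below is only a placeholder, not from the original comment:

ansible-playbook -i inventory/hosts.yml main.yml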
I'm thinking maybe this issue should be in https://github.com/geerlingguy/ansible-for-kubernetes?
I ran an ansible ad-hoc command to get all of the
I did a diff across all the files and saw that I was mistaken. The

root@node05:~# kubectl get nodes
error: You must be logged in to the server (Unauthorized)
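For anyone following along, an ad-hoc fetch and diff roughly like the following would reproduce that comparison; the exact commands here are assumptions, not the ones from the original comment:

# Pull /etc/kubernetes/admin.conf from every host into /tmp/fetch/<hostname>/...
ansible all -i inventory/hosts.yml -b -m fetch -a "src=/etc/kubernetes/admin.conf dest=/tmp/fetch"

# Compare each worker's copy against node01's copy.
for i in 2 3 4 5; do
  diff /tmp/fetch/node01/etc/kubernetes/admin.conf /tmp/fetch/node0$i/etc/kubernetes/admin.conf
done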
Replaced all worker nodes with the exact same

#!/bin/bash
for i in 2 3 4 5
do
  ansible -m copy -a "src=/tmp/fetch/node01/etc/kubernetes/admin.conf dest=/etc/kubernetes/admin.conf" -i inventory/hosts.yml all -b --limit node0$i
done

Then ran the playbook again and was met with a completed execution, but still only seeing node01 when I check on all the nodes:
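A quick way to see what each host reports (a sketch using the same inventory, not a command from the original comment) is an ad-hoc kubectl call:

ansible all -i inventory/hosts.yml -b -m command -a "kubectl get nodes"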
I completely missed the

all:
  children:
    kube:
      children:
        kubemaster:
        kubeworker:
    kubemaster:
      hosts:
        node01:
    kubeworker:
      hosts:
        node0[2:5]:

---
# Kubernetes master configuration.
kubernetes_role: master

---
# Kubernetes worker configuration.
kubernetes_role: node

I reran the playbook, logged back into node one, and I could see all of the worker nodes in the cluster! 🎉

root@node01:~# kubectl get nodes
NAME                STATUS   ROLES                  AGE   VERSION
node01.test.local   Ready    control-plane,master   60s   v1.20.11
node02.test.local   Ready    <none>                 32s   v1.20.11
node03.test.local   Ready    <none>                 33s   v1.20.11
node04.test.local   Ready    <none>                 33s   v1.20.11
node05.test.local   Ready    <none>                 31s   v1.20.11

Sorry for the confusion and opening up a ticket unnecessarily. Thanks for all the work you do!
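For anyone hitting the same thing: the two variable files above presumably live under group_vars/ next to the inventory, named after the inventory groups (the layout below is an assumption), and the per-group value of kubernetes_role can be confirmed with an ad-hoc debug call before rerunning the playbook:

# Assumed layout:
#   inventory/hosts.yml
#   inventory/group_vars/kubemaster.yml   -> kubernetes_role: master
#   inventory/group_vars/kubeworker.yml   -> kubernetes_role: node

ansible all -i inventory/hosts.yml -m debug -a "var=kubernetes_role"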
I'm following along in the Ansible for Kubernetes book to stand up a 5-node cluster. The cluster is running on Ubuntu 20.04 across the board. Node 1 (master) completes this task just fine; however, all 4 worker nodes fail on this task with the following:
The only override vars I'm using are as follows:
I will add any further info here as I continue to troubleshoot this issue.
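If it helps with troubleshooting, rerunning the play against a single worker with extra verbosity should surface the full error from the Configure Flannel networking task; the playbook filename here is again just a placeholder:

ansible-playbook -i inventory/hosts.yml main.yml -vvv --limit node02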