
Test and make work with new 64-bit Raspberry Pi OS #36

Closed
geerlingguy opened this issue May 29, 2020 · 10 comments


@geerlingguy
Contributor

From a commenter on one of my YouTube videos:

> something I noticed is that using k3s-ansible on the beta 64bit OS didn't work, it was missing the k3s binary and can't find it .. did you faced the same? and if so how did you fixed that

(see comment).

I've been slowly working through testing some of my own automation on the new 64-bit version of the Pi OS, and I've found that some images and binaries have to be downloaded differently based on the arch (which, in the past, I always assumed was armv7 or arm32 on Raspbian, which is not necessarily true as of yesterday).

So this issue is mostly a reminder to me to do some work testing k3s-ansible on the 64-bit OS. I'm also tracking this internally for my Turing Pi cluster work, which uses a mix of different Pi versions (some which can't run Pi OS 64-bit), so it would be helpful to be able to make it work with all flavors for the foreseeable future.
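The arch-dependent download described above comes down to mapping `uname -m` output to the right k3s release artifact. A minimal shell sketch of that mapping (the artifact names `k3s-armhf`/`k3s-arm64` follow k3s's release naming, but verify them against the actual releases page before relying on this):

```shell
# Map a machine architecture (as reported by `uname -m`) to the
# matching k3s release artifact name. 64-bit Pi OS reports aarch64,
# while 32-bit Raspbian reports armv7l (or armv6l on older Pis).
k3s_artifact() {
  case "$1" in
    aarch64)        echo "k3s-arm64" ;;
    armv7l|armv6l)  echo "k3s-armhf" ;;
    x86_64)         echo "k3s" ;;
    *)              echo "unsupported arch: $1" >&2; return 1 ;;
  esac
}

# On a node you would call:
#   k3s_artifact "$(uname -m)"
```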


@fnord123 commented May 30, 2020

When I tested a pre-release version of their 64-bit OS a couple of weeks back, I found Ansible needs to test for aarch64 in order to download the correct version of k3s for 64-bit Raspbian. See my pull https://github.com/rancher/k3s-ansible/pull/34/files#diff-a6257193d67fe18587001a0e1c080878 which (in addition to adding Ubuntu support) works on 64-bit Raspbian.

I'll grab their latest 64-bit beta Raspbian build and verify my pull works there as well.


I've created pull request #37 which works for the latest Raspios 64 beta. It was also tested on the current Raspios 32 release.

@fnord123

@geerlingguy can you grab the tip and verify this works for you, then close the issue if so? Thanks :)

@geerlingguy
Contributor Author

@fnord123 - I will test this out later today.

@geerlingguy
Contributor Author

I can confirm the entire playbook runs:

PLAY RECAP *************************************************************************************************************
10.0.100.102               : ok=14   changed=10   unreachable=0    failed=0    skipped=7    rescued=0    ignored=0   
10.0.100.131               : ok=14   changed=10   unreachable=0    failed=0    skipped=7    rescued=0    ignored=0   
10.0.100.141               : ok=25   changed=17   unreachable=0    failed=0    skipped=7    rescued=0    ignored=0   
10.0.100.83                : ok=14   changed=10   unreachable=0    failed=0    skipped=7    rescued=0    ignored=0 

But then I checked the status of the cluster, and it seems none of the worker nodes were connected; only the master was visible:

$ kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
raspberrypi   Ready    master   2m52s   v1.17.5+k3s1

@geerlingguy
Contributor Author

geerlingguy commented Jun 2, 2020

Ah... that was due to:

Jun 02 03:51:02 raspberrypi k3s[925]: time="2020-06-02T03:51:02.799250309+01:00"
level=error msg="Node password rejected, duplicate hostname or contents of
'/etc/rancher/node/password' may not match server node-passwd entry,
try enabling a unique node name with the --with-node-id flag"

Since I'm using Pi OS it defaults to raspberrypi as the hostname ... for every Pi. I had to log into each of the nodes and run:

sudo hostnamectl set-hostname my-unique-hostname
sudo reboot

And then after the reboot, they started appearing in the cluster.

$ kubectl get nodes
NAME                STATUS   ROLES    AGE     VERSION
raspberrypi         Ready    master   11m     v1.17.5+k3s1
worker-dramble-01   Ready    <none>   2m50s   v1.17.5+k3s1
worker-dramble-02   Ready    <none>   63s     v1.17.5+k3s1
worker-dramble-03   Ready    <none>   28s     v1.17.5+k3s1

Yay! Looks like this issue is good to go, thanks SO much to @fnord123 for the PR!

@geerlingguy
Contributor Author

geerlingguy commented Jun 2, 2020

Closing, as this is working great on the new OS (confirmed on my 2GB Pi 4s, but I'll likely also run some tests on some Compute Module 3+ boards, which have the same CPU as a Pi 3 B+).

@fnord123

fnord123 commented Jun 2, 2020

> Since I'm using Pi OS it defaults to raspberrypi as the hostname ... for every Pi. I had to log into each of the nodes and run:
>
> sudo hostnamectl set-hostname my-unique-hostname
> sudo reboot

Yeah, I ran into this too and found it to be a bit of a hassle. Personally, I'd like the Ansible script to do that work too :). Perhaps something like kmaster and knode1..knodeN. If I put something together, would a PR of this sort be interesting to this project?
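That naming scheme could be sketched like this in shell (a hypothetical sketch, not project code; the IP addresses are just the ones from the play recap earlier in the thread, reused for illustration):

```shell
# Derive a unique hostname per inventory position: the first host
# becomes kmaster, the rest knode1..knodeN.
node_name() {
  if [ "$1" -eq 0 ]; then echo "kmaster"; else echo "knode$1"; fi
}

i=0
for host in 10.0.100.141 10.0.100.83 10.0.100.102 10.0.100.131; do
  echo "$host -> $(node_name "$i")"
  # Applying it for real would then be, per node, something like:
  #   ssh "$host" "sudo hostnamectl set-hostname $(node_name "$i") && sudo reboot"
  i=$((i + 1))
done
```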

> Yay! Looks like this issue is good to go, thanks SO much to @fnord123 for the PR!

You are welcome :)

@geerlingguy
Contributor Author

The hard thing is there are dozens of different ways hostnames are managed, whether via cloud-init, via a cloud provider's internal tooling, via a particular distribution's hostname management utilities, etc.

I've tried in the past to write some universal automation for it and gave up. Instead, I always put in the docs something like "make sure each server has a unique hostname" and let them figure it out :D

@tuxpeople

@geerlingguy just FYI I use this in my home lab:

- name: BASE_OS | SETTINGS | Ensure hostname set
  hostname:
    name: "{{ inventory_hostname }}"
  notify: 'BASE_OS | Reboot'

- name: BASE_OS | SETTINGS | Ensure hostname is in /etc/hosts
  lineinfile:
    dest: /etc/hosts
    regexp: "^{{ ansible_default_ipv4.address }}.+$"
    line: "{{ ansible_default_ipv4.address }} {{ ansible_fqdn }} {{ ansible_hostname }}"

Not sure what the hostname module will do with cloud-init :-)
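For comparison, the lineinfile task above does roughly what this shell sketch does (the Ansible module additionally gives you idempotent change reporting, check mode, and backups):

```shell
# Ensure the hosts file has exactly one entry for the node's IP,
# replacing any existing line that starts with that IP.
# $1 = IP address, $2 = FQDN, $3 = short hostname, $4 = hosts file
ensure_hosts_entry() {
  line="$1 $2 $3"
  if grep -q "^$1" "$4"; then
    sed -i "s|^$1.*|$line|" "$4"
  else
    echo "$line" >> "$4"
  fi
}
```

Running it twice leaves a single entry for the IP, which is the idempotence the regexp in the task above is there to provide.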
