Template does not work when using a single control-plane node #115

Closed
jp83 opened this issue Oct 23, 2021 · 6 comments
Comments

jp83 commented Oct 23, 2021


What steps did you take and what happened:

I'm getting back to rebuilding my cluster and thought I'd try working through this template and adopting the k8s-at-home opinions. I'll keep debugging on my own, but thought I'd document this since I believe I followed the README verbatim and still hit this issue. I set up 2 fresh Ubuntu Server node VMs and defined them as the _0 and _1 Ansible hosts in .config.env, with only the first one having control node = True.

TASK [xanmanning.k3s : Check the conditions when a single controller is defined] ***************************************************************************************************************************************************
fatal: [k8s-0]: FAILED! => changed=false
  assertion: (k3s_etcd_datastore is not defined or not k3s_etcd_datastore)
  evaluated_to: false
  msg: Control plane configuration is invalid. Please see notes about k3s_control_node and HA in README.md.
skipping: [k8s-1]
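
For context, here is a rough sketch of the variables that assertion is checking. The variable names come from the error above; the file paths and the way the values are split are assumptions based on the template layout, not copied from my actual setup:

```yaml
# host_vars for the first node (path assumed, e.g. provision/ansible/inventory/host_vars/k8s-0.yml)
k3s_control_node: true        # only node _0 is a control node in this setup

# group_vars shared by all nodes (provision/ansible/inventory/group_vars/kubernetes/k3s.yml)
k3s_etcd_datastore: true      # embedded etcd datastore enabled for the group
```

With only one host flagged as a control node while the embedded etcd datastore is enabled, the assertion evaluates to false and the play stops.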

What did you expect to happen:

I hoped to install k3s and bootstrap the cluster, then move on to the next steps.

Anything else you would like to add:

Two other minor onboarding points to mention for ease of use:

This warning appeared several times when running ./configure.sh:
[PGP] WARN[0000] Deprecation Warning: GPG key fetching from a keyserver within sops will be removed in a future version of sops. See getsops/sops#727 for more information.

I had already defined the hostnames I wanted on the nodes and noticed that the script automatically changed them to the hardcoded k8s-0, k8s-1, .... I was going to change them back in the inventory, but noticed they are also used elsewhere, such as in the sops YAML (provision/ansible/inventory/host_vars/k8s-0.sops.yml).

Additional Information:

jp83 commented Oct 23, 2021

The checks seem to enforce at least 3 control-plane nodes when using the internal etcd datastore. This makes sense, but I saw that you had just a master and a worker node in your example. I found I can move on by setting k3s_use_unsupported_config: true in ./provision/ansible/inventory/group_vars/kubernetes/k3s.yml. Hopefully I can still figure out how to expand the cluster later and get back to a supported config. Maybe this documentation will help somebody else searching; you can close this if you feel that is sufficient. Thanks.
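
For anyone else searching, the change looks roughly like this. The path and variable name are the ones mentioned above; the rest of the file is omitted:

```yaml
# ./provision/ansible/inventory/group_vars/kubernetes/k3s.yml
# Allow a single control-plane node together with the embedded etcd datastore;
# without this the role fails the assertion quoted in the issue description.
k3s_use_unsupported_config: true
```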

jp83 commented Oct 23, 2021

Something still isn't right: the 2nd node isn't showing a ROLE of worker. I tried the k3s-nuke playbook and reinstalled, since I had run into the earlier problem.

$ kubectl --kubeconfig=./provision/kubeconfig get nodes
NAME    STATUS   ROLES                       AGE    VERSION
k8s-0   Ready    control-plane,etcd,master   117s   v1.21.5+k3s1
k8s-1   Ready    <none>                      65s    v1.21.5+k3s1

onedr0p (Owner) commented Oct 23, 2021

Workers aren't given a role by default. You need to add roles to the nodes manually with kubectl.
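
For example, something along these lines should do it. The node name is taken from the output above, and the label value is arbitrary, since kubectl derives ROLES from the node-role.kubernetes.io/<role> label key:

```sh
# Label the second node so "worker" shows up under ROLES in kubectl get nodes
kubectl --kubeconfig=./provision/kubeconfig label node k8s-1 node-role.kubernetes.io/worker=true
```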

jp83 commented Oct 25, 2021

Just to further document my initial experience with using the template from scratch...

The missing worker role (see the comment above) is not an issue; a node with no role label seems to act as a worker by default (I read that somewhere too).

I mentioned on Discord (which, for anyone trying to get on board, has much more active and appropriate support than simply submitting issues) that my next problem was:

flux-system core False kustomize build failed: accumulating resources: accumulation err='accumulating resources from 'system-upgrade': read /tmp/core283791273/cluster/core/system-upgrade: is a directory': recursed accumulation of path '/tmp/core283791273/cluster/core/system-upgrade': accumulating resources: accumulation err='accumulating resources from 'github.com/rancher/system-upgrade-controller': open /tmp/core283791273/cluster/core/system-upgrade/github.com/rancher/system-upgrade-controller: no such file or directory': git cmd = '/usr/bin/git fetch --depth=1 origin HEAD': exit status 128 False

As suggested, it did turn out to be a non-obvious DNS issue related to my nodes getting statically assigned IPs from DHCP that included domain and search entries. Running nslookup via exec from within a pod made it seem like DNS was working at first. After manually setting static IPs on the nodes with just the network, gateway, and public nameservers, it was able to continue deploying from the repo.
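
For reference, here is a rough netplan sketch of what "just network, gateway, and public nameservers" ended up looking like on Ubuntu Server. The file name, interface name, and addresses below are made-up examples rather than my actual values:

```yaml
# /etc/netplan/00-static.yaml (hypothetical example)
network:
  version: 2
  ethernets:
    eth0:                            # interface name is an assumption
      dhcp4: false
      addresses: [192.168.1.10/24]   # example node IP
      gateway4: 192.168.1.1          # example gateway
      nameservers:
        addresses: [1.1.1.1, 8.8.8.8]
        # note: no "search" domains here; those were what tripped up cluster DNS
```

Apply with sudo netplan apply and re-check DNS from inside a pod afterwards.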

And finally, I added namespace: default to config-pvc.yaml for hajimari as well to get it to deploy, after cross-referencing https://github.com/onedr0p/home-cluster/blob/7a28da949a1962f8ad515a91cdd00c86671c208f/cluster/apps/home/hajimari/config-pvc.yaml#L6
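
The change amounts to adding the namespace under metadata. This is a sketch only; everything here apart from the added namespace line is a placeholder, not the real file contents:

```yaml
# cluster/apps/home/hajimari/config-pvc.yaml (sketch)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hajimari-config   # placeholder name
  namespace: default      # the line I had to add for it to deploy
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi        # placeholder size
```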

Sorry for overloading the initial issue, but hopefully documenting my overall experience helps others. As I gain more confidence and figure out the proper course of action, I'll try to help contribute.

edmundmiller added a commit to edmundmiller/home-ops that referenced this issue Nov 16, 2021

parsec commented Jan 25, 2022

Thanks for this! I had the exact same problem. I wasn't expecting the setup to want all three nodes to be control plane nodes. With k3s I don't think there's really any reason not to do it that way, but I wasn't entirely sure.

onedr0p changed the title from "Check the conditions when a single controller is defined" to "Template does not work when using a single control-plane node" on Feb 3, 2022
onedr0p (Owner) commented Feb 3, 2022

Docs added in 85a26f6

onedr0p pinned this issue Feb 3, 2022
onedr0p closed this as completed Mar 19, 2022
onedr0p unpinned this issue Mar 19, 2022