
K3s Server fails to run on fresh Fedora 33 install #2797

Closed
darkdatter opened this issue Jan 10, 2021 · 18 comments

@darkdatter

Environmental Info:
Fedora 33 (workstation)
K3s version 1.20.0+k3s2 (2ae6b163)

Describe the bug:
k3s service endlessly flaps

Steps To Reproduce:

  • Start up Fedora 33
  • Install the SELinux policies, etc. (per the Rancher docs)
  • Install k3s (sudo curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" sh -)

Expected behavior:
The service would start and the master node would be available

Actual behavior:
The k3s server starts/stops endlessly until the user intervenes

Logs:
These messages appear once the service dies for the first time:

msg="Cluster-Http-Server 2021/01/09 23:22:32 http: TLS handshake error from 127.0.0.1:50154: remote error: tls: bad certificate"
msg="Cluster-Http-Server 2021/01/09 23:22:32 http: TLS handshake error from 127.0.0.1:50160: remote error: tls: bad certificate"
msg="runtime core not ready"
msg="Failed to retrieve agent config: https://127.0.0.1:6443/v1-k3s/serving-kubelet.crt: 500 Internal Server Error"

After these messages, the log fills with Go goroutine stack traces.
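
For anyone reproducing this, the flapping and the excerpts above can be watched with the standard systemd tools (generic commands, nothing specific to this report):

systemctl status k3s
journalctl -u k3s -f   # follow the unit log while the service restarts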

@i5Js

i5Js commented Jan 10, 2021

Hi, try version 1.19; I'm having the same issues with 1.20.

@darkdatter
Author

darkdatter commented Jan 12, 2021

@i5Js After attempting to install v1.19.5+k3s2, I now get a cgroups-related error:

Failed to find cpuset cgroup, you may need to add \"cgroup_enable=cpuset\" to your linux cmdline

Perhaps the same as: https://github.com/k3s-io/k3s/issues/900, but trying this did not help:

sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
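
For reference, a quick way to check which cgroup hierarchy the host actually booted with (standard kernel/systemd paths, not something taken from this issue):

stat -fc %T /sys/fs/cgroup/             # prints "cgroup2fs" on the unified v2 hierarchy, "tmpfs" on v1
cat /sys/fs/cgroup/cgroup.controllers   # on v2, lists the controllers (cpuset, memory, ...) the kernel exposes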

@brandond
Member

brandond commented Jan 12, 2021

You didn't include any OS or platform info; is this on arm?

@darkdatter
Author

darkdatter commented Jan 12, 2021

@brandond Apologies, no...

Arch: x86_64 (Intel)
Kernel (uname -r): 5.9.16-200.fc33.x86_64

@brandond
Member

brandond commented Jan 12, 2021

Can you attach complete logs? journalctl --no-pager -u k3s &> k3s.log

@darkdatter
Author

@brandond It is just endless lines of Go output, which I don't think provides much debugging value to you.

Jan 09 23:51:31 localhost.localdomain k3s[48291]: github.com/rancher/k3s/vendor/github.com/k3s-io/kine/pkg/logstructured.(*LogStructured).ttl.func1(0x4d56da0, 0xc000571a80, 0xc000dfc000, 0xc000153ec0, 0xc002095a60)
Jan 09 23:51:31 localhost.localdomain k3s[48291]: /go/src/github.com/rancher/k3s/vendor/github.com/k3s-io/kine/pkg/logstructured/logstructured.go:306 +0xea
Jan 09 23:51:31 localhost.localdomain k3s[48291]: created by github.com/rancher/k3s/vendor/github.com/k3s-io/kine/pkg/logstructured.(*LogStructured).ttl
Jan 09 23:51:31 localhost.localdomain k3s[48291]: /go/src/github.com/rancher/k3s/vendor/github.com/k3s-io/kine/pkg/logstructured/logstructured.go:305 +0xc5
Jan 09 23:51:31 localhost.localdomain k3s[48291]: goroutine 3671 [select]:
Jan 09 23:51:31 localhost.localdomain k3s[48291]: github.com/rancher/k3s/vendor/github.com/k3s-io/kine/pkg/logstructured.(*LogStructured).ttl.func1(0x4d56da0, 0xc000571a80, 0xc000dfc000, 0xc000153ec0, 0xc002095ac0)
Jan 09 23:51:31 localhost.localdomain k3s[48291]: /go/src/github.com/rancher/k3s/vendor/github.com/k3s-io/kine/pkg/logstructured/logstructured.go:306 +0xea
Jan 09 23:51:31 localhost.localdomain k3s[48291]: created by github.com/rancher/k3s/vendor/github.com/k3s-io/kine/pkg/logstructured.(*LogStructured).ttl
Jan 09 23:51:31 localhost.localdomain k3s[48291]: /go/src/github.com/rancher/k3s/vendor/github.com/k3s-io/kine/pkg/logstructured/logstructured.go:305 +0xc5
Jan 09 23:51:31 localhost.localdomain k3s[48291]: goroutine 3672 [select]:
Jan 09 23:51:31 localhost.localdomain k3s[48291]: github.com/rancher/k3s/vendor/github.com/k3s-io/kine/pkg/logstructured.(*LogStructured).ttl.func1(0x4d56da0, 0xc000571a80, 0xc000dfc000, 0xc000153ec0, 0xc002095b20)
Jan 09 23:51:31 localhost.localdomain k3s[48291]: /go/src/github.com/rancher/k3s/vendor/github.com/k3s-io/kine/pkg/logstructured/logstructured.go:306 +0xea
Jan 09 23:51:31 localhost.localdomain k3s[48291]: created by github.com/rancher/k3s/vendor/github.com/k3s-io/kine/pkg/logstructured.(*LogStructured).ttl
Jan 09 23:51:31 localhost.localdomain k3s[48291]: /go/src/github.com/rancher/k3s/vendor/github.com/k3s-io/kine/pkg/logstructured/logstructured.go:305 +0xc5
Jan 09 23:51:31 localhost.localdomain k3s[48291]: goroutine 3673 [select]:
Jan 09 23:51:31 localhost.localdomain k3s[48291]: github.com/rancher/k3s/vendor/github.com/k3s-io/kine/pkg/logstructured.(*LogStructured).ttl.func1(0x4d56da0, 0xc000571a80, 0xc000dfc000, 0xc000153ec0, 0xc002095b80)
Jan 09 23:51:31 localhost.localdomain k3s[48291]: /go/src/github.com/rancher/k3s/vendor/github.com/k3s-io/kine/pkg/logstructured/logstructured.go:306 +0xea
Jan 09 23:51:31 localhost.localdomain k3s[48291]: created by github.com/rancher/k3s/vendor/github.com/k3s-io/kine/pkg/logstructured.(*LogStructured).ttl
Jan 09 23:51:31 localhost.localdomain k3s[48291]: /go/src/github.com/rancher/k3s/vendor/github.com/k3s-io/kine/pkg/logstructured/logstructured.go:305 +0xc5
Jan 09 23:51:31 localhost.localdomain k3s[48291]: goroutine 3674 [select]:
Jan 09 23:51:31 localhost.localdomain k3s[48291]: github.com/rancher/k3s/vendor/github.com/k3s-io/kine/pkg/logstructured.(*LogStructured).ttl.func1(0x4d56da0, 0xc000571a80, 0xc000dfc000, 0xc000153ec0, 0xc002095d60)
Jan 09 23:51:31 localhost.localdomain k3s[48291]: /go/src/github.com/rancher/k3s/vendor/github.com/k3s-io/kine/pkg/logstructured/logstructured.go:306 +0xea
Jan 09 23:51:31 localhost.localdomain k3s[48291]: created by github.com/rancher/k3s/vendor/github.com/k3s-io/kine/pkg/logstructured.(*LogStructured).ttl
Jan 09 23:51:31 localhost.localdomain k3s[48291]: /go/src/github.com/rancher/k3s/vendor/github.com/k3s-io/kine/pkg/logstructured/logstructured.go:305 +0xc5
Jan 09 23:51:31 localhost.localdomain k3s[48291]: goroutine 3675 [select]:
Jan 09 23:51:31 localhost.localdomain k3s[48291]: github.com/rancher/k3s/vendor/github.com/k3s-io/kine/pkg/logstructured.(*LogStructured).ttl.func1(0x4d56da0, 0xc000571a80, 0xc000dfc000, 0xc000153ec0, 0xc002095dc0)
Jan 09 23:51:31 localhost.localdomain k3s[48291]: /go/src/github.com/rancher/k3s/vendor/github.com/k3s-io/kine/pkg/logstructured/logstructured.go:306 +0xea

@rancher-max
Contributor

This looks like a duplicate of #900

@darkdatter You can try the workarounds listed there to see if they help resolve the issue.

@darkdatter
Author

@rancher-max as I stated above, I tried the main workaround from #900 and it did not help with this issue.

@darkdatter
Author

@i5Js @brandond @rancher-max

FYI: I scrubbed the k3s install and tried the v1.19.5+k3s1 release with no success, so I then went further back and tried v1.18.3+k3s1, and it works! (Again, this is the latest Fedora 33 as of the date this issue was submitted.)

@FilBot3

FilBot3 commented Jan 16, 2021

I'm thinking that Fedora uses cgroups v2, which k8s (and thus k3s) doesn't support yet. I found this in a blog post about k3s on Fedora 32.

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   629  100   629    0     0   2022      0 --:--:-- --:--:-- --:--:--  2022
100 51.0M  100 51.0M    0     0  19.1M      0  0:00:02  0:00:02 --:--:-- 27.3M
➜  Downloads chmod u+x k3s 
➜  Downloads sudo ./k3s server                                                                    
INFO[2021-01-15T22:26:03.398501744-06:00] Starting k3s v1.19.7+k3s1 (5a00e38d)         
INFO[2021-01-15T22:26:03.398875241-06:00] Cluster bootstrap already complete           
INFO[2021-01-15T22:26:03.410646665-06:00] Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s 
INFO[2021-01-15T22:26:03.410662109-06:00] Configuring database table schema and indexes, this may take a moment... 
INFO[2021-01-15T22:26:03.410772518-06:00] Database tables and indexes are up to date   
INFO[2021-01-15T22:26:03.411679541-06:00] Kine listening on unix://kine.sock           
INFO[2021-01-15T22:26:03.411921118-06:00] Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key 
I0115 22:26:03.413229   10979 server.go:652] external host was not specified, using 192.168.1.23
I0115 22:26:03.413611   10979 server.go:177] Version: v1.19.7+k3s1
I0115 22:26:03.418082   10979 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0115 22:26:03.418095   10979 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0115 22:26:03.419150   10979 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0115 22:26:03.419163   10979 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0115 22:26:03.440962   10979 master.go:271] Using reconciler: lease
W0115 22:26:03.751642   10979 genericapiserver.go:412] Skipping API batch/v2alpha1 because it has no resources.
W0115 22:26:03.762958   10979 genericapiserver.go:412] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W0115 22:26:03.775524   10979 genericapiserver.go:412] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0115 22:26:03.796346   10979 genericapiserver.go:412] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0115 22:26:03.804579   10979 genericapiserver.go:412] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0115 22:26:03.816845   10979 genericapiserver.go:412] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0115 22:26:03.830505   10979 genericapiserver.go:412] Skipping API apps/v1beta2 because it has no resources.
W0115 22:26:03.830514   10979 genericapiserver.go:412] Skipping API apps/v1beta1 because it has no resources.
I0115 22:26:03.838132   10979 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0115 22:26:03.838160   10979 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
INFO[2021-01-15T22:26:03.847999149-06:00] Waiting for API server to become available   
INFO[2021-01-15T22:26:03.848048235-06:00] Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --profiling=false --secure-port=0 
I0115 22:26:03.848654   10979 registry.go:173] Registering SelectorSpread plugin
I0115 22:26:03.848672   10979 registry.go:173] Registering SelectorSpread plugin
INFO[2021-01-15T22:26:03.848989276-06:00] Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true 
INFO[2021-01-15T22:26:03.850411190-06:00] Node token is available at /var/lib/rancher/k3s/server/token 
INFO[2021-01-15T22:26:03.850432605-06:00] To join node to cluster: k3s agent -s https://192.168.1.23:6443 -t ${NODE_TOKEN} 
INFO[2021-01-15T22:26:03.851485849-06:00] Wrote kubeconfig /etc/rancher/k3s/k3s.yaml   
INFO[2021-01-15T22:26:03.851504471-06:00] Run: k3s kubectl                             
WARN[2021-01-15T22:26:03.851604096-06:00] Failed to find cpuset cgroup, you may need to add "cgroup_enable=cpuset" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi) 
ERRO[2021-01-15T22:26:03.851640825-06:00] Failed to find memory cgroup, you may need to add "cgroup_memory=1 cgroup_enable=memory" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi) 
FATA[2021-01-15T22:26:03.851652675-06:00] failed to find memory cgroup, you may need to add "cgroup_memory=1 cgroup_enable=memory" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi) 
➜  Downloads cd
➜  ~ neofetch
          /:-------------:\          filbot@oryx-fedora 
       :-------------------::        ------------------ 
     :-----------/shhOHbmp---:\      OS: Fedora 33 (Workstation Edition) x86_64 
   /-----------omMMMNNNMMD  ---:     Host: Oryx Pro oryp6 
  :-----------sMMMMNMNMP.    ---:    Kernel: 5.10.6-200.fc33.x86_64 
 :-----------:MMMdP-------    ---\   Uptime: 2 hours, 31 mins 
,------------:MMMd--------    ---:   Packages: 2215 (rpm), 58 (flatpak) 
:------------:MMMd-------    .---:   Shell: zsh 5.8 
:----    oNMMMMMMMMMNho     .----:   Resolution: 1920x1080 
:--     .+shhhMMMmhhy++   .------/   DE: GNOME 3.38.2 
:-    -------:MMMd--------------:    WM: Mutter 
:-   --------/MMMd-------------;     WM Theme: Adwaita 
:-    ------/hMMMy------------:      Theme: Adwaita [GTK2/3] 
:-- :dMNdhhdNMMNo------------;       Icons: Adwaita [GTK2/3] 
:---:sdNMMMMNds:------------:        Terminal: gnome-terminal 
:------:://:-------------::          CPU: Intel i7-10875H (16) @ 5.100GHz 
:---------------------://            GPU: Intel UHD Graphics 
                                     GPU: NVIDIA GeForce RTX 2060 Mobile 
                                     Memory: 5102MiB / 31977MiB 

➜  ~

I'm trying to change as little as possible about the system, but that may not be feasible. Since k3d doesn't work with Podman, I may have to try another operating system or a virtual machine, at which point there are other options too.
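
For completeness, the workaround most people were using at the time was to boot Fedora back onto the legacy cgroup v1 hierarchy, as suggested earlier in this thread (a sketch only; it changes behaviour system-wide, needs a reboot, and did not help darkdatter above):

sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
sudo reboot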

@rancher-max
Contributor

Cgroups v2 support has been fixed in #2844, and I validated it on a fresh Fedora 33 setup from AWS. Could you check whether this is also working for you now, on commit f3c41b7650340bddfa44129c72e7f9fb79061b90?

$ k get nodes -o wide
NAME        STATUS   ROLES                       AGE   VERSION                INTERNAL-IP     EXTERNAL-IP       OS-IMAGE                    KERNEL-VERSION           CONTAINER-RUNTIME
server1     Ready    control-plane,etcd,master   38m   v1.20.2+k3s-f3c41b76   <redacted>      <redacted>        Fedora 33 (Cloud Edition)   5.8.15-301.fc33.x86_64   containerd://1.4.3-k3s1
agent1      Ready    <none>                      29m   v1.20.2+k3s-f3c41b76   <redacted>      <redacted>        Fedora 33 (Cloud Edition)   5.8.15-301.fc33.x86_64   containerd://1.4.3-k3s1
server2     Ready    control-plane,etcd,master   34m   v1.20.2+k3s-f3c41b76   <redacted>      <redacted>        Fedora 33 (Cloud Edition)   5.8.15-301.fc33.x86_64   containerd://1.4.3-k3s1

@T0MASD

T0MASD commented Feb 10, 2021

@rancher-max I was following https://www.rancher.co.jp/docs/k3s/latest/en/installation/ and tried running curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.20.2+k3s-f3c41b76 sh - on a fresh Fedora 33 install, but I only get this:

[INFO]  Using v1.20.2+k3s-f3c41b76 as release
[INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v1.20.2+k3s-f3c41b76/sha256sum-amd64.txt

Nothing happens after that. Do I need to specify extra args?

Edit: https://github.com/k3s-io/k3s/releases/download/v1.20.2+k3s-f3c41b76/sha256sum-amd64.txt returns a 404.

@brandond
Member

brandond commented Feb 10, 2021

@T0MASD That is not a valid release version; it's a dev build. You can install builds from CI using the INSTALL_K3S_COMMIT variable. For example, to install the current master build you can do:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --token=token" INSTALL_K3S_COMMIT=6e768c301e77738ffd69934363aa0479c4a516d6 bash -
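
Once installed, you can confirm which build you ended up with (standard k3s CLI; the exact version string below is just an example):

k3s --version   # a CI build reports something like v1.20.2+k3s-<commit>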

@T0MASD

T0MASD commented Feb 11, 2021

Just had a go on my fresh Fedora 33 install:

[tomas@study-pc-qa ~]$ sudo curl -sfL https://get.k3s.io | INSTALL_K3S_COMMIT=6e768c301e77738ffd69934363aa0479c4a516d6 bash -
[INFO]  Using commit 6e768c301e77738ffd69934363aa0479c4a516d6 as release
[INFO]  Downloading hash https://storage.googleapis.com/k3s-ci-builds/k3s-6e768c301e77738ffd69934363aa0479c4a516d6.sha256sum
[INFO]  Skipping binary downloaded, installed k3s matches hash
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s
[tomas@study-pc-qa ~]$ systemctl status k3s
● k3s.service - Lightweight Kubernetes
     Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: disabled)
     Active: active (running) since Thu 2021-02-11 05:42:28 UTC; 13s ago
       Docs: https://k3s.io
    Process: 36122 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
    Process: 36124 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
   Main PID: 36125 (k3s-server)
      Tasks: 24
     Memory: 569.7M
        CPU: 13.387s
     CGroup: /system.slice/k3s.service
             ├─36125 /usr/local/bin/k3s server
             └─36137 containerd

and

[tomas@study-pc-qa ~]$ sudo kubectl get nodes -o wide
NAME          STATUS   ROLES                  AGE   VERSION                INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                   KERNEL-VERSION            CONTAINER-RUNTIME
study-pc-qa   Ready    control-plane,master   70s   v1.20.2+k3s-6e768c30   192.168.122.125   <none>        Fedora 33 (Thirty Three)   5.10.13-200.fc33.x86_64   containerd://1.4.3-k3s3

I didn't run any further tests beyond what's above.

@FilBot3

FilBot3 commented Feb 14, 2021

I was able to install k3s with that commit, and it did seem to start, as shown in your outputs. However, I'm getting some output I'm not familiar with:

➜  ~ k3s kubectl --kubeconfig=$HOME/.k3s/config.yaml version
WARN[2021-02-14T15:32:42.287200767-06:00] Unable to read /etc/rancher/k3s/k3s.yaml, please start server with --write-kubeconfig-mode to modify kube config permissions 
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2+k3s-6e768c30", GitCommit:"6e768c301e77738ffd69934363aa0479c4a516d6", GitTreeState:"clean", BuildDate:"2021-02-10T20:20:37Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2+k3s-6e768c30", GitCommit:"6e768c301e77738ffd69934363aa0479c4a516d6", GitTreeState:"clean", BuildDate:"2021-02-10T20:20:37Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}

➜  ~ k3s kubectl --kubeconfig=$HOME/.k3s/config.yaml get pods -o wide
WARN[2021-02-14T15:34:19.573498774-06:00] Unable to read /etc/rancher/k3s/k3s.yaml, please start server with --write-kubeconfig-mode to modify kube config permissions 
No resources found in default namespace.

➜  ~ k3s kubectl --kubeconfig=$HOME/.k3s/config.yaml get nodes       
WARN[2021-02-14T15:34:25.920349979-06:00] Unable to read /etc/rancher/k3s/k3s.yaml, please start server with --write-kubeconfig-mode to modify kube config permissions 
The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?

➜  ~ sudo ss -alpn | grep -i 6443
tcp   LISTEN 0      4096                                                                                     *:6443                   *:*      users:(("k3s-server",pid=24530,fd=7))                                                                                                                                     

➜  ~ sudo ss -alpn | grep k3s    
u_dgr UNCONN 0      0                                                                                   @000b8 229274                 * 0      users:(("k3s-server",pid=24672,fd=6))                                                                                                                                     
u_str LISTEN 0      4096                                                                             kine.sock 231743                 * 0      users:(("k3s-server",pid=24672,fd=14))                                                                                                                                    
u_str LISTEN 0      4096                                             /run/k3s/containerd/containerd.sock.ttrpc 229674                 * 0      users:(("containerd",pid=24745,fd=12))                                                                                                                                    
u_str LISTEN 0      4096                                                   /run/k3s/containerd/containerd.sock 229675                 * 0      users:(("containerd",pid=24745,fd=14))                                                                                                                                    
tcp   LISTEN 0      4096                                                                             127.0.0.1:6444             0.0.0.0:*      users:(("k3s-server",pid=24672,fd=18))                                                                                                                                    
tcp   LISTEN 0      4096                                                                                     *:6443                   *:*      users:(("k3s-server",pid=24672,fd=7))                                                          

Then when I tried with just plain ol' kubectl:

➜  ~ kubectl get pods --kubeconfig=$HOME/.k3s/config.yaml --all-namespaces
No resources found

➜  ~ kubectl get nodes --kubeconfig=$HOME/.k3s/config.yaml            
The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?

The contents of $HOME/.k3s/config.yaml:
➜  ~ cat $HOME/.k3s/config.yaml 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: (redacted)
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: (redacted)
    client-key-data: (redacted)
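
The permission warning above comes from the embedded kubectl trying to read /etc/rancher/k3s/k3s.yaml; two common ways to deal with it (a sketch, using the flag named by the warning itself and the paths already shown in this thread):

sudo k3s server --write-kubeconfig-mode 644       # let the server write a world-readable kubeconfig
# or copy the kubeconfig out once and point kubectl at the copy:
sudo cat /etc/rancher/k3s/k3s.yaml > $HOME/.k3s/config.yaml
export KUBECONFIG=$HOME/.k3s/config.yaml

The intermittent "connection refused" together with the changing k3s-server PID in the ss output (24530, then 24672) suggests the server was still restarting between commands.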

@FilBot3

FilBot3 commented Feb 27, 2021

Can confirm: v1.20.4+k3s is functioning as expected using the containerd shims on Fedora 33. I was able to run a few pods, although kubecolor messed up the interactive terminal. I then shut things down using the k3s-killall.sh script. The systemd unit still doesn't fully kill it.

@brandond
Member

Glad it's working. Note that the systemd unit is not intended to kill the pods.
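
For anyone following along, a full teardown is what the helper scripts created by the installer are for (paths taken from the install log earlier in this thread):

sudo /usr/local/bin/k3s-killall.sh      # stops k3s and kills all of the pods/containers it started
sudo /usr/local/bin/k3s-uninstall.sh    # additionally removes the k3s install entirely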

@dlouzan

dlouzan commented Jun 18, 2021

This could also be closed, couldn't it?
