
Allow specifying containerd snapshotter (and auto-detect the best snapshotter) #924

Closed
AkihiroSuda opened this issue Oct 19, 2019 · 8 comments
Labels: kind/enhancement (An improvement to existing functionality)

@AkihiroSuda (Contributor) commented Oct 19, 2019

Is your feature request related to a problem? Please describe.

Ubuntu and Debian (since Debian 10) support mounting overlayfs in user namespaces.

This allows k3s rootless mode to use the containerd overlay snapshotter.

However, containerd v1.3.0 started calling mknod with device number 0:0 to create overlay whiteout files, which is not permitted in user namespaces even on Ubuntu and Debian: containerd/containerd#3762

So k3s rootless mode cannot run some containers, including the helm container that is required for deploying Traefik:

E1019 19:24:27.024977   21918 remote_runtime.go:200] CreateContainer in sandbox "941524a9dca443253b954cc64183131f5cc89054cd0dd70f4ab342671cd1bf1f" from runtime service failed: rpc error: code = Unknown desc = failed to create containerd container: error unpacking image: failed to extract layer sha256:d635f458a6f8a4f3dd57a597591ab8977588a5a477e0a68027d18612a248906f: failed to convert whiteout file "etc/ca-certificates/.wh..wh..opq": operation not permitted: unknown
E1019 19:24:27.025143   21918 kuberuntime_manager.go:783] container start failed: 
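
For context, an overlayfs whiteout file is a character device with device number 0:0, and creating one with mknod requires CAP_MKNOD in the initial user namespace, so the call fails inside an unprivileged user namespace. A minimal sketch of the failure, assuming util-linux's unshare is available:

# As real root this mknod succeeds; inside an unprivileged user
# namespace it fails with EPERM, which surfaces as the
# "operation not permitted" error in the log above.
unshare --user --map-root-user mknod whiteout c 0 0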

Describe the solution you'd like

Although the issue is likely to be fixed in containerd v1.3.1 or v1.3.2 (containerd/containerd#3763), it would be good to allow k3s users to specify containerd's native snapshotter as a workaround.

Also, k3s should detect the best snapshotter automatically.
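
Auto-detection would presumably probe whether an overlay mount actually works in the current environment and fall back to the native snapshotter otherwise. A minimal sketch of such a probe in shell (the fallback order is an assumption here, not k3s's actual logic; the probe needs enough privilege to mount, i.e. root or the rootless user namespace):

# Attempt a throwaway overlay mount; fall back to "native" on failure.
tmp=$(mktemp -d)
mkdir -p "$tmp/lower" "$tmp/upper" "$tmp/work" "$tmp/merged"
if mount -t overlay overlay \
    -o "lowerdir=$tmp/lower,upperdir=$tmp/upper,workdir=$tmp/work" \
    "$tmp/merged" 2>/dev/null; then
  umount "$tmp/merged"
  echo overlayfs
else
  echo native
fi
rm -rf "$tmp"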

@erikwilson (Contributor)

Thanks for the updates @AkihiroSuda; with those changes it appears to be working well:

vagrant@k3s:~$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=19.04
DISTRIB_CODENAME=disco
DISTRIB_DESCRIPTION="Ubuntu 19.04"

vagrant@k3s:~$ sudo apt-get install uidmap
Reading package lists... Done
Building dependency tree
Reading state information... Done
uidmap is already the newest version (1:4.5-1.1ubuntu2).
0 upgraded, 0 newly installed, 0 to remove and 89 not upgraded.

vagrant@k3s:~$ cat /etc/subuid
vagrant:100000:65536

vagrant@k3s:~$ curl -sfL https://github.com/rancher/k3s/releases/download/v0.10.1-rc1/k3s -o k3s

vagrant@k3s:~$ chmod a+x k3s

vagrant@k3s:~$ rm -rf .rancher/

vagrant@k3s:~$ ./k3s server --rootless >k3s.log 2>&1 &
[1] 2183

vagrant@k3s:~$ export KUBECONFIG=~/.kube/k3s.yaml

vagrant@k3s:~$ ./k3s kubectl get all -A
NAMESPACE     NAME                                          READY   STATUS      RESTARTS   AGE
kube-system   pod/local-path-provisioner-58fb86bdfd-sl9cp   1/1     Running     0          5m58s
kube-system   pod/coredns-57d8bbb86-kq266                   1/1     Running     0          5m58s
kube-system   pod/helm-install-traefik-nlz7b                0/1     Completed   0          5m58s
kube-system   pod/svclb-traefik-m6mcl                       3/3     Running     0          5m40s
kube-system   pod/traefik-65bccdc4bd-ksl4n                  1/1     Running     0          5m40s

NAMESPACE     NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                     AGE
default       service/kubernetes   ClusterIP      10.43.0.1       <none>        443/TCP                                     6m15s
kube-system   service/kube-dns     ClusterIP      10.43.0.10      <none>        53/UDP,53/TCP,9153/TCP                      6m14s
kube-system   service/traefik      LoadBalancer   10.43.177.137   127.0.0.1     80:32698/TCP,443:30443/TCP,8080:32326/TCP   5m40s

NAMESPACE     NAME                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-system   daemonset.apps/svclb-traefik   1         1         1       1            1           <none>          5m40s

NAMESPACE     NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/local-path-provisioner   1/1     1            1           6m14s
kube-system   deployment.apps/coredns                  1/1     1            1           6m14s
kube-system   deployment.apps/traefik                  1/1     1            1           5m40s

NAMESPACE     NAME                                                DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/local-path-provisioner-58fb86bdfd   1         1         1       5m58s
kube-system   replicaset.apps/coredns-57d8bbb86                   1         1         1       5m58s
kube-system   replicaset.apps/traefik-65bccdc4bd                  1         1         1       5m40s

NAMESPACE     NAME                             COMPLETIONS   DURATION   AGE
kube-system   job.batch/helm-install-traefik   1/1           18s        6m14s

^ cc @ShylajaDevadiga

@ShylajaDevadiga (Contributor)

With --rootless mode in v0.10.0, IP forwarding needs to be enabled for the servicelb pods: echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
Closing issue.
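
That sysctl does not survive a reboot; the standard way to persist it (generic Linux configuration, not k3s-specific, and the file name below is arbitrary) is:

# Persist IP forwarding across reboots:
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ip-forward.conf
sudo sysctl --system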

@AkihiroSuda (Contributor, Author)

@ShylajaDevadiga I don't think that relates to this issue. Could you reopen?

@erikwilson (Contributor)

Sorry about that; I had asked Shylaja to test and close if working (the fix for the helm pod errors is in v0.10.1).
Did you want to use this issue to track further improvements for auto-detecting the best snapshotter?

@AkihiroSuda (Contributor, Author)

Yes. Manually specifying the snapshotter should also be supported.
Specifying the snapshotter is probably useful for rootful use cases too, especially when the host rootfs is ZFS or btrfs.
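
As an illustration, the filesystem backing the data directory can be checked with stat (a sketch; /var/lib/rancher is the default k3s data dir):

# Print the filesystem type backing the k3s data dir;
# "zfs" or "btrfs" would suggest choosing a matching snapshotter.
stat -f -c %T /var/lib/rancher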

@tx19980520

I'm using a NanoPC-T4, and when I run k3s server & the log says:

E1107 13:49:43.967007   17226 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to create containerd task: failed to mount rootfs component &{overlay overlay [workdir=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/37/work upperdir=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/37/fs lowerdir=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1/fs]}: invalid argument: unknown
E1107 13:49:43.967191   17226 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-66f496764-fwmwb_kube-system(60fe76f5-9fe9-4563-bf2d-2a35fadcdea1)" failed: rpc error: code = Unknown desc = failed to create containerd task: failed to mount rootfs component &{overlay overlay [workdir=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/37/work upperdir=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/37/fs lowerdir=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1/fs]}: invalid argument: unknown
E1107 13:49:43.967250   17226 kuberuntime_manager.go:692] createPodSandbox for pod "coredns-66f496764-fwmwb_kube-system(60fe76f5-9fe9-4563-bf2d-2a35fadcdea1)" failed: rpc error: code = Unknown desc = failed to create containerd task: failed to mount rootfs component &{overlay overlay [workdir=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/37/work upperdir=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/37/fs lowerdir=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1/fs]}: invalid argument: unknown
E1107 13:49:43.967496   17226 pod_workers.go:190] Error syncing pod 60fe76f5-9fe9-4563-bf2d-2a35fadcdea1 ("coredns-66f496764-fwmwb_kube-system(60fe76f5-9fe9-4563-bf2d-2a35fadcdea1)"), skipping: failed to "CreatePodSandbox" for "coredns-66f496764-fwmwb_kube-system(60fe76f5-9fe9-4563-bf2d-2a35fadcdea1)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-66f496764-fwmwb_kube-system(60fe76f5-9fe9-4563-bf2d-2a35fadcdea1)\" failed: rpc error: code = Unknown desc = failed to create containerd task: failed to mount rootfs component &{overlay overlay [workdir=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/37/work upperdir=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/37/fs lowerdir=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1/fs]}: invalid argument: unknown"

Also, when I use ctr i import image.tar, I see:

unpacking k8s.gcr.io/pause:3.1 (sha256:8900fe5dc467fdf3fdc306993da6ede3049674958b0475c300e5d58f3d6b12af)...done
unpacking docker.io/coredns/coredns:1.6.3 (sha256:92b3ddeb27eb0a5b96dc0cfa69bf8b138afe8ca50cf38cd7f29a5bcaae6319a1)...INFO[2019-11-07T13:45:47.305558979Z] apply failure, attempting cleanup             error="failed to extract layer sha256:8b229224d9bee906cbe95253f4643e2e9f49ba3e381e87e47f4e73f63b27ec5d: failed to mount /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount027581958: invalid argument: unknown" key="extract-275603268-20bp sha256:a46a9d6a4fbf14955972688ac106db0f5e1dc94e1dd002043082e1c2e5d0c739"
ctr: failed to extract layer sha256:8b229224d9bee906cbe95253f4643e2e9f49ba3e381e87e47f4e73f63b27ec5d: failed to mount /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount027581958: invalid argument: unknown

When I use k3s ctr images import k3s-airgap-images-arm64.tar --snapshotter=native, the image import works, but the server still fails with the error above.

What can I do to specify the snapshotter now, or should we just use --docker?

@davidnuzik davidnuzik added the kind/enhancement An improvement to existing functionality label Nov 7, 2019
@davidnuzik davidnuzik added this to the Backlog milestone Nov 7, 2019
@vtolstov

I have this problem too.

@AkihiroSuda (Contributor, Author)

The current master branch supports --snapshotter=(overlayfs|fuse-overlayfs|native)
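
Usage would look something like this (a sketch against a master build; the flag values are taken from the line above, and pairing --rootless with fuse-overlayfs is an assumption):

# Force the native snapshotter (the workaround requested in this issue):
k3s server --snapshotter=native

# Rootless mode with fuse-overlayfs, avoiding kernel overlayfs limitations:
k3s server --rootless --snapshotter=fuse-overlayfs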
