CNI plugin fails to start in LXD cluster #55151
Comments
/sig cluster-ops
Probably this will make no difference, but here is the LXC profile used by Canonical's Kubernetes for LXD deployments: https://github.com/conjure-up/spells/blob/master/canonical-kubernetes/steps/00_process-providertype/lxd-profile.yaml
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
I deployed a Kubernetes 1.8.2 cluster manually within a couple of Ubuntu 16.04 LXD containers (kubemaster1 and kubeworker1), following the procedure described in this guide. I had to make a few tweaks to the default configuration to bring it up:
Docker 17.06 or above, from the 'test' apt repository (17.03 from 'stable' does not work due to this issue)
Kubelet service configuration: I had to add a couple of arguments (see the sketch after this list for how I applied them):
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --fail-swap-on=false –cgroup-driver=cgroupfs"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
(--fail-swap-on=false is required because, even though the containers are configured not to use swap, swap-related paths still show up in the containers' file systems and kubelet stats those paths to detect swap usage.)
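For completeness, this is roughly how I applied the kubelet arguments above. The drop-in path is the one the kubeadm deb installs on Ubuntu, so verify it on your system before running this:

# Append the two flags to the existing KUBELET_KUBECONFIG_ARGS line, then restart:
sudo sed -i 's|KUBELET_KUBECONFIG_ARGS=|&--fail-swap-on=false --cgroup-driver=cgroupfs |' \
    /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
sudo systemctl daemon-reload
sudo systemctl restart kubelet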
With those settings I can successfully run 'kubeadm init --skip-preflight-checks --pod-network-cidr=' in the kubemaster1 container to initialize the master; after that the kubelet service starts and I can use kubectl to interact with the master.
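For anyone reproducing this, the init step looks roughly like the following. The 10.244.0.0/16 CIDR is just flannel's documented default, not necessarily the value you need:

# Illustrative pod CIDR only (flannel's documented default):
kubeadm init --skip-preflight-checks --pod-network-cidr=10.244.0.0/16
# Standard post-init steps printed by kubeadm, so kubectl can reach the master:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes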
Now when I try to install a network plugin (e.g. flannel), all network-related pods (kube-flannel, kube-proxy, kube-dns) fail to start. The reason is that they are unable to mount the default ServiceAccount token, as seen in the pod events reported by 'kubectl describe':
Indeed, if I try the same 'mount -t' command at a kubemaster1 shell prompt, it fails with the same error message. That seems to be the proper behavior for a container, even a privileged one, since mounting file systems is a very dangerous operation that only the host's root (UID 0) should be able to perform.
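A quick way to reproduce the denial from inside the container (the mount target here is arbitrary):

# Run inside the LXD container; even as root in the container's namespaces,
# mount(2) is typically blocked by the host's AppArmor/seccomp confinement.
mkdir -p /tmp/mnt-test
mount -t tmpfs tmpfs /tmp/mnt-test   # typically fails with "permission denied"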
I can disable the default ServiceAccount token mount in the respective DaemonSets' configurations, but then the pods won't start because they can't find the token files at the mount path the code expects.
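For reference, the knob I mean is the standard automountServiceAccountToken pod field; a hypothetical patch against flannel's DaemonSet (the object name may differ in your manifest) looks like this:

# Hypothetical example; adjust the DaemonSet name to match your manifest.
kubectl -n kube-system patch daemonset kube-flannel-ds --type merge \
  -p '{"spec":{"template":{"spec":{"automountServiceAccountToken":false}}}}'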
What you expected to happen:
CNI-related pods start and pod networking becomes operational. This should indeed be possible, since e.g. Canonical's Juju is supposed to be able to deploy Kubernetes to a cluster of LXD containers (though it does not work in my environment).
The kubelet should be able to detect that it is running inside an LXD container (or it should be possible to tell it so through a command-line argument or configuration parameter), so that it does not use 'mount -t' to mount ServiceAccount tokens in that environment. (I tried the '--containerized' argument, and kubelet won't even start inside the LXD container.)
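For what it's worth, the signals such a detection could use do exist. A sketch follows; this is not an existing kubelet flag or feature, just the checks it could perform:

# LXC/LXD set container=lxc in PID 1's environment, and LXD exposes /dev/lxd/sock.
if grep -aq 'container=lxc' /proc/1/environ || [ -S /dev/lxd/sock ]; then
    echo "running inside an LXC/LXD container"
fi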
How to reproduce it (as minimally and precisely as possible):
Create two LXD containers (e.g. kubemaster1 and kubeworker1), then follow the steps described above to install the Kubernetes cluster and the networking plugin in kubemaster1.
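A sketch of the container setup step: security.privileged, security.nesting and linux.kernel_modules are standard LXD config keys, and the module list follows common Kubernetes-in-LXD guides (adjust as needed):

# Create a profile that lets a container run nested container runtimes:
lxc profile create k8s
lxc profile set k8s security.privileged true
lxc profile set k8s security.nesting true
lxc profile set k8s linux.kernel_modules ip_tables,ip6_tables,netlink_diag,nf_nat,overlay
lxc launch ubuntu:16.04 kubemaster1 -p default -p k8s
lxc launch ubuntu:16.04 kubeworker1 -p default -p k8s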
Anything else we need to know?:
A seemingly related problem was reported and discussed in this issue.
Environment:
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.2", GitCommit:"bdaeafa71f6c7c04636251031f93464384d54963", GitTreeState:"clean", BuildDate:"2017-10-24T19:48:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.2", GitCommit:"bdaeafa71f6c7c04636251031f93464384d54963", GitTreeState:"clean", BuildDate:"2017-10-24T19:38:10Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Cloud provider or hardware configuration:
HP ZBook 15 portable workstation
16GB RAM
512GB HDD
Intel Core i7 vPro chipset
OS (e.g. from /etc/os-release):
Host:
NAME="Ubuntu"
VERSION="16.04.3 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.3 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
LXD container:
NAME="Ubuntu"
VERSION="16.04.3 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.3 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
Kernel (e.g. uname -a):
Host:
Linux elx74401d27 4.4.0-97-generic #120-Ubuntu SMP Tue Sep 19 17:28:18 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
LXD container:
Linux kubemaster1 4.4.0-97-generic #120-Ubuntu SMP Tue Sep 19 17:28:18 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
Install tools:
Manual install following https://kubernetes.io/docs/setup/independent/install-kubeadm/
Others:
Docker version inside LXD containers:
Client:
Version: 17.10.0-ce
API version: 1.33
Go version: go1.8.3
Git commit: f4ffd25
Built: Tue Oct 17 19:04:16 2017
OS/Arch: linux/amd64
Server:
Version: 17.10.0-ce
API version: 1.33 (minimum version 1.12)
Go version: go1.8.3
Git commit: f4ffd25
Built: Tue Oct 17 19:02:56 2017
OS/Arch: linux/amd64
Experimental: false