Unable to run kind node in k8s pod on CoreOS #2646

Open
tim-ebert opened this issue Dec 18, 2019 · 6 comments
tim-ebert commented Dec 18, 2019

Issue Report

Bug

I am currently trying to run a kind node inside a Kubernetes Pod running on a CoreOS Node.
The Pod manifest looks like this:

apiVersion: v1
kind: Pod
metadata:
  name: kind-worker-1
  namespace: kind
  labels:
    app: kind-node
    node: worker-1
spec:
  containers:
  - env:
    - name: PATH
      value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    - name: container
      value: docker
    image: kindest/node:v1.16.3
    name: kind-node
    resources:
      limits:
        cpu: "2"
        memory: 8Gi
      requests:
        cpu: "2"
        memory: 8Gi
    securityContext:
      privileged: true
    stdin: true
    volumeMounts:
    - mountPath: /lib/modules
      name: modules
      readOnly: true
    - mountPath: /sys/fs/cgroup
      name: cgroup
    - mountPath: /var/lib/docker
      name: dind-storage
  volumes:
  - emptyDir: {}
    name: dind-storage
  - hostPath:
      path: /lib/modules
      type: Directory
    name: modules
  - hostPath:
      path: /sys/fs/cgroup
      type: Directory
    name: cgroup

I am able to bootstrap the node, join the cluster, and so on. However, roughly every 50 seconds the kubelet and containerd systemd services inside the Pod are restarted, after kubelet messages like these:

Dec 17 12:14:07 kind-worker-1 kubelet[2903]: I1217 12:14:07.676823    2903 pod_container_manager_linux.go:166] Attempt to kill process with pid: 1
Dec 17 12:14:07 kind-worker-1 kubelet[2903]: I1217 12:14:07.676850    2903 pod_container_manager_linux.go:166] Attempt to kill process with pid: 2896
Dec 17 12:14:07 kind-worker-1 kubelet[2903]: I1217 12:14:07.676861    2903 pod_container_manager_linux.go:166] Attempt to kill process with pid: 2903
Dec 17 12:14:08 kind-worker-1 systemd[1]: kubelet.service: Service RestartSec=1s expired, scheduling restart.
Dec 17 12:14:08 kind-worker-1 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 70.
Dec 17 12:14:08 kind-worker-1 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Dec 17 12:14:08 kind-worker-1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
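
Since /sys/fs/cgroup is bind-mounted from the host, a quick sanity check (a sketch, assuming a shell inside the kind node container) is to look at which cgroup hierarchy a process actually lives in; for the nested services this path ends up under the outer pod's /kubepods/pod<uid>/... cgroup, as the CGroup: lines in the systemctl output further down in this thread also show.

```shell
# Print the cgroup membership of the current shell. Inside the kind node
# container these paths sit below the outer pod's cgroup on the host
# (e.g. /kubepods/pod<uid>/<container-id>/...), so anything walking the
# host hierarchy can see, and signal, the nested systemd services.
cat /proc/self/cgroup
```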

As a result, the node cannot stay Ready and cannot run any Pods.

Container Linux Version

$ cat /etc/os-release
NAME="Container Linux by CoreOS"
ID=coreos
VERSION=2135.6.0
VERSION_ID=2135.6.0
BUILD_ID=2019-07-30-0722
PRETTY_NAME="Container Linux by CoreOS 2135.6.0 (Rhyolite)"
ANSI_COLOR="38;5;75"
HOME_URL="https://coreos.com/"
BUG_REPORT_URL="https://issues.coreos.com"
COREOS_BOARD="amd64-usr"

Environment

What hardware/cloud provider/hypervisor is being used to run Container Linux?
AWS

Expected Behavior

The systemd services should keep running inside the Docker container.

Actual Behavior

The systemd services are killed roughly every 50 seconds.

Reproduction Steps

  1. Run a Pod with a kindest/node image on a CoreOS node.
  2. Try to bootstrap a kind cluster.

Other Information

I have already read through the related issues over at kind (mainly kubernetes-sigs/kind#303 and kubernetes-sigs/kind#890) and tried all the suggestions there, but none of them seems to fix the problem on CoreOS.

/cc @BenTheElder
/cc @aojea

lucab (Member) commented Dec 18, 2019

Thanks for the report.

I'm a bit confused here, as I don't think we ship any kubelet.service with Container Linux.
Where is that unit running, on the host or within a pod? And what does it look like?

tim-ebert (Author) commented Dec 18, 2019

Sorry for the confusion.
So the kubelet service is actually running inside the kindest/node container after joining the cluster via kubeadm.

The unit is generated by kubeadm and looks something like this:

[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
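
As an aside, the empty ExecStart= line is the standard systemd idiom for drop-ins: it clears the ExecStart inherited from the base unit before the override defines its own command. The EnvironmentFile=-/etc/default/kubelet line also means extra flags can be injected without touching the unit itself; for example, the --v=7 verbosity visible in the systemctl output later in this thread could be set like this (an illustrative sketch, not necessarily how it was actually configured here):

```shell
# /etc/default/kubelet (sourced via the EnvironmentFile= line above).
# KUBELET_EXTRA_ARGS is appended to the kubelet command line; --v=7
# matches the verbosity seen in the status output in this thread.
KUBELET_EXTRA_ARGS=--v=7
```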

Both this unit and the containerd service inside the kind container are killed every 50 seconds, and I am not sure what to look for here.

aojea commented Dec 18, 2019

Hi @lucab (kind developer here), nice to meet you again :)
kind runs Kubernetes in Docker; in this case Tim is trying to run Kubernetes in Docker in Kubernetes, and I was wondering whether CoreOS has some security or filesystem restrictions that could be causing this behavior.

tim-ebert (Author) commented Dec 18, 2019

Thanks for clarifying, @aojea.

lucab (Member) commented Dec 18, 2019

So, if I understood correctly, there is a dedicated systemd instance running in a pod, with kubelet.service running below it.

I would have a look at journalctl and systemctl status for those units, which should at least tell you why each unit is restarting.
Other than that, the logs in the original ticket show something on a killing spree, trying to kill a bunch of processes including PID 1, which does not seem like a very healthy thing to do.
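
The restart reason is usually spelled out in that systemctl/journal text itself; a small sketch of how to pull out the relevant fields (the sample string is copied, abridged, from the kubelet status output later in this thread):

```shell
# Extract the kill signal and the restart counter from systemd output.
# $sample is an abridged copy of the kubelet.service status/journal lines
# shown elsewhere in this thread.
sample='Process: 14026 ExecStart=/usr/bin/kubelet ... (code=killed, signal=KILL)
kubelet.service: Scheduled restart job, restart counter is at 70.'
reason=$(printf '%s\n' "$sample" | grep -oE 'signal=[A-Z]+|restart counter is at [0-9]+')
printf '%s\n' "$reason"
```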

tim-ebert (Author) commented Dec 18, 2019

Yes, exactly. That is the problem I am currently stuck on.

I attached some journal logs from the kubelet and containerd services shortly before they are killed.
containerd.log
kubelet.log

Here is some output from systemctl status shortly before and after the services are killed:

root@kind-worker-1:/# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/kind/systemd/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Wed 2019-12-18 11:52:58 UTC; 21s ago
     Docs: http://kubernetes.io/docs/
 Main PID: 14026 (kubelet)
    Tasks: 18 (limit: 4915)
   Memory: 33.0M
   CGroup: /kubepods/podfb87cd2a-bc2d-46ac-b5c0-e1673b21a438/cce19d47d2587cc2c3996a0d1042d70ef25434abe454edd5a269e98edaae670d/system.slice/kubelet.service
           └─14026 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --fail-swap-
on=false --node-ip=10.241.132.21 --v=7 --fail-swap-on=false

Dec 18 11:53:18 kind-worker-1 kubelet[14026]: I1218 11:53:18.839886   14026 helpers.go:781] eviction manager: observations: signal=imagefs.available, available: 42092580Ki, capacity: 48375392Ki, time: 2019-12-18 11:53:18.137623173 +0000 UTC
Dec 18 11:53:18 kind-worker-1 kubelet[14026]: I1218 11:53:18.839911   14026 helpers.go:781] eviction manager: observations: signal=imagefs.inodesFree, available: 12371658, capacity: 12444032, time: 2019-12-18 11:53:18.137623173 +0000 UTC
Dec 18 11:53:18 kind-worker-1 kubelet[14026]: I1218 11:53:18.839922   14026 helpers.go:781] eviction manager: observations: signal=pid.available, available: 31943, capacity: 32Ki, time: 2019-12-18 11:53:18.838649371 +0000 UTC m=+20.823194956
Dec 18 11:53:18 kind-worker-1 kubelet[14026]: I1218 11:53:18.839932   14026 helpers.go:781] eviction manager: observations: signal=memory.available, available: 13818732Ki, capacity: 15950572Ki, time: 2019-12-18 11:53:18.809029308 +0000 UTC m=+20.793574907
Dec 18 11:53:18 kind-worker-1 kubelet[14026]: I1218 11:53:18.839942   14026 helpers.go:781] eviction manager: observations: signal=allocatableMemory.available, available: 15151388Ki, capacity: 15950572Ki, time: 2019-12-18 11:53:18.839809486 +0000 UTC m=+20.824355082
Dec 18 11:53:18 kind-worker-1 kubelet[14026]: I1218 11:53:18.839951   14026 helpers.go:781] eviction manager: observations: signal=nodefs.available, available: 42092580Ki, capacity: 48375392Ki, time: 2019-12-18 11:53:18.809029308 +0000 UTC m=+20.793574907
Dec 18 11:53:18 kind-worker-1 kubelet[14026]: I1218 11:53:18.839961   14026 helpers.go:781] eviction manager: observations: signal=nodefs.inodesFree, available: 12371658, capacity: 12444032, time: 2019-12-18 11:53:18.809029308 +0000 UTC m=+20.793574907
Dec 18 11:53:18 kind-worker-1 kubelet[14026]: I1218 11:53:18.839984   14026 eviction_manager.go:320] eviction manager: no resources are starved
Dec 18 11:53:18 kind-worker-1 kubelet[14026]: I1218 11:53:18.909515   14026 config.go:100] Looking for [api file], have seen map[]
Dec 18 11:53:19 kind-worker-1 kubelet[14026]: I1218 11:53:19.009526   14026 config.go:100] Looking for [api file], have seen map[]
root@kind-worker-1:/# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/kind/systemd/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: signal) since Wed 2019-12-18 11:53:20 UTC; 66ms ago
     Docs: http://kubernetes.io/docs/
  Process: 14026 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=killed, signal=KILL)
 Main PID: 14026 (code=killed, signal=KILL)
root@kind-worker-1:/# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/kind/systemd/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Wed 2019-12-18 11:53:22 UTC; 80ms ago
     Docs: http://kubernetes.io/docs/
 Main PID: 14561 (kubelet)
    Tasks: 12 (limit: 4915)
   Memory: 15.0M
   CGroup: /kubepods/podfb87cd2a-bc2d-46ac-b5c0-e1673b21a438/cce19d47d2587cc2c3996a0d1042d70ef25434abe454edd5a269e98edaae670d/system.slice/kubelet.service
           └─14561 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --fail-swap-
on=false --node-ip=10.241.132.21 --v=7 --fail-swap-on=false

Dec 18 11:53:22 kind-worker-1 kubelet[14561]: I1218 11:53:22.089609   14561 config.go:412] Receiving a new pod "blackbox-exporter-855496dd6b-lw729_kube-system(2470f68d-1cb0-447a-8c20-d28d0a39b53a)"
Dec 18 11:53:22 kind-worker-1 kubelet[14561]: I1218 11:53:22.089665   14561 config.go:412] Receiving a new pod "coredns-7cbffb9d6b-678cz_kube-system(5349c653-4bd1-427e-9c6e-fbc93053b798)"
Dec 18 11:53:22 kind-worker-1 kubelet[14561]: I1218 11:53:22.089675   14561 config.go:412] Receiving a new pod "coredns-7cbffb9d6b-rrgr9_kube-system(ec31a47c-4b3d-4c70-939a-e82080211d12)"
Dec 18 11:53:22 kind-worker-1 kubelet[14561]: I1218 11:53:22.089685   14561 config.go:412] Receiving a new pod "metrics-server-7c8756b968-8mn6l_kube-system(df37faad-e7ed-463a-862b-595961f0f2a8)"
Dec 18 11:53:22 kind-worker-1 kubelet[14561]: I1218 11:53:22.089695   14561 config.go:412] Receiving a new pod "kube-proxy-6lzmh_kube-system(a7e622f1-ec41-4a9c-8ded-0d13785875a3)"
Dec 18 11:53:22 kind-worker-1 kubelet[14561]: I1218 11:53:22.089704   14561 config.go:412] Receiving a new pod "node-exporter-fpg7g_kube-system(3c7b8c0f-0ee0-41ad-a68f-b9f5a6c37cbf)"
Dec 18 11:53:22 kind-worker-1 kubelet[14561]: I1218 11:53:22.089714   14561 config.go:412] Receiving a new pod "vpn-shoot-6746d498ff-n59pt_kube-system(9ea15479-1211-4760-9927-2fa34d5071bd)"
Dec 18 11:53:22 kind-worker-1 kubelet[14561]: I1218 11:53:22.089723   14561 config.go:412] Receiving a new pod "node-problem-detector-trj4n_kube-system(a1e0c1d4-89b2-411c-9d72-212574432069)"
Dec 18 11:53:22 kind-worker-1 kubelet[14561]: I1218 11:53:22.090770   14561 round_trippers.go:446] Response Status: 200 OK in 1 milliseconds
Dec 18 11:53:22 kind-worker-1 kubelet[14561]: I1218 11:53:22.090812   14561 round_trippers.go:446] Response Status: 200 OK in 2 milliseconds
root@kind-worker-1:/# systemctl status containerd
● containerd.service - containerd container runtime
   Loaded: loaded (/etc/systemd/system/containerd.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2019-12-18 11:56:34 UTC; 46s ago
     Docs: https://containerd.io
  Process: 20885 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
 Main PID: 20886 (containerd)
    Tasks: 105
   Memory: 102.0M
   CGroup: /kubepods/podfb87cd2a-bc2d-46ac-b5c0-e1673b21a438/cce19d47d2587cc2c3996a0d1042d70ef25434abe454edd5a269e98edaae670d/system.slice/containerd.service
           ├─20886 /usr/local/bin/containerd
           ├─21694 /usr/local/bin/containerd-shim-runc-v1 -namespace k8s.io -id fb1d6d317be1643a4c86ad16e7ccdb5be807228729ab9cdc49c41866ab6be26e -address /run/containerd/containerd.sock
           ├─21991 /usr/local/bin/containerd-shim-runc-v1 -namespace k8s.io -id d11d6c948efcd0e49090c3ecaade330d4d2c52ffb60322649e8da84bd408f15f -address /run/containerd/containerd.sock
           ├─22036 runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/d11d6c948efcd0e49090c3ecaade330d4d2c52ffb60322649e8da84bd408f15f/log.json --log-format json create --bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/d11d6c
948efcd0e49090c3ecaade330d4d2c52ffb60322649e8da84bd408f15f --pid-file /run/containerd/io.containerd.runtime.v2.task/k8s.io/d11d6c948efcd0e49090c3ecaade330d4d2c52ffb60322649e8da84bd408f15f/init.pid d11d6c948efcd0e49090c3ecaade330d4d2c52ffb60322649e8da84bd408f15f
           ├─22066 /usr/local/bin/containerd-shim-runc-v1 -namespace k8s.io -id 3fdb961df304f58c4f9ae0ebd6613b9a3b4bf9f8b2eb7a659ecfd776a9bb7a02 -address /run/containerd/containerd.sock
           ├─22080 /usr/local/bin/containerd-shim-runc-v1 -namespace k8s.io -id e1444dfdb2e3a399e44b737dc6d4c8b6e4519287ff4d16e56da683e3cb3383d5 -address /run/containerd/containerd.sock
           ├─22089 runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/3fdb961df304f58c4f9ae0ebd6613b9a3b4bf9f8b2eb7a659ecfd776a9bb7a02/log.json --log-format json create --bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/3fdb96
1df304f58c4f9ae0ebd6613b9a3b4bf9f8b2eb7a659ecfd776a9bb7a02 --pid-file /run/containerd/io.containerd.runtime.v2.task/k8s.io/3fdb961df304f58c4f9ae0ebd6613b9a3b4bf9f8b2eb7a659ecfd776a9bb7a02/init.pid 3fdb961df304f58c4f9ae0ebd6613b9a3b4bf9f8b2eb7a659ecfd776a9bb7a02
           ├─22107 /usr/local/bin/containerd-shim-runc-v1 -namespace k8s.io -id ce06d36dddb80a4affeefe37f60cd844f392d5f75102e1e5fd4d378b9abda717 -address /run/containerd/containerd.sock
           ├─22154 /usr/local/bin/containerd-shim-runc-v1 -namespace k8s.io -id 1a2a2b818d89db7fa2377e20df61b85d60d3fb09b4528d8b4411957b8a04b3f8 -address /run/containerd/containerd.sock
           └─22274 /usr/local/bin/containerd-shim-runc-v1 -namespace k8s.io -id 8ec35bb8944b47df5cabd8e69f57670ee193fa21490f30d6c2e88d315f527338 -address /run/containerd/containerd.sock

Dec 18 11:57:19 kind-worker-1 containerd[20886]: time="2019-12-18T11:57:19.527979778Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e1444dfdb2e3a399e44b737dc6d4c8b6e4519287ff4d16e56da683e3cb3383d5 pid=22080
Dec 18 11:57:19 kind-worker-1 containerd[20886]: time="2019-12-18T11:57:19.555670665Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce06d36dddb80a4affeefe37f60cd844f392d5f75102e1e5fd4d378b9abda717 pid=22107
Dec 18 11:57:19 kind-worker-1 containerd[20886]: time="2019-12-18T11:57:19.584758374Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a2a2b818d89db7fa2377e20df61b85d60d3fb09b4528d8b4411957b8a04b3f8 pid=22154
Dec 18 11:57:19 kind-worker-1 containerd[20886]: time="2019-12-18T11:57:19.687899264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7cbffb9d6b-rrgr9,Uid:ec31a47c-4b3d-4c70-939a-e82080211d12,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce06d36ddd
b80a4affeefe37f60cd844f392d5f75102e1e5fd4d378b9abda717\""
Dec 18 11:57:19 kind-worker-1 containerd[20886]: time="2019-12-18T11:57:19.690334540Z" level=info msg="PullImage \"coredns/coredns:1.6.3\""
Dec 18 11:57:19 kind-worker-1 containerd[20886]: time="2019-12-18T11:57:19.698089495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:metrics-server-7c8756b968-8mn6l,Uid:df37faad-e7ed-463a-862b-595961f0f2a8,Namespace:kube-system,Attempt:3,} returns sandbox id \"1a2
a2b818d89db7fa2377e20df61b85d60d3fb09b4528d8b4411957b8a04b3f8\""
Dec 18 11:57:19 kind-worker-1 containerd[20886]: time="2019-12-18T11:57:19.700981925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7cbffb9d6b-678cz,Uid:5349c653-4bd1-427e-9c6e-fbc93053b798,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1444dfdb2
e3a399e44b737dc6d4c8b6e4519287ff4d16e56da683e3cb3383d5\""
Dec 18 11:57:20 kind-worker-1 containerd[20886]: time="2019-12-18T11:57:20.038419926Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:node-problem-detector-trj4n,Uid:a1e0c1d4-89b2-411c-9d72-212574432069,Namespace:kube-system,Attempt:0,}"
Dec 18 11:57:20 kind-worker-1 containerd[20886]: time="2019-12-18T11:57:20.086019519Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8ec35bb8944b47df5cabd8e69f57670ee193fa21490f30d6c2e88d315f527338 pid=22274
Dec 18 11:57:20 kind-worker-1 containerd[20886]: time="2019-12-18T11:57:20.147494504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:node-problem-detector-trj4n,Uid:a1e0c1d4-89b2-411c-9d72-212574432069,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ec35bb
8944b47df5cabd8e69f57670ee193fa21490f30d6c2e88d315f527338\""
root@kind-worker-1:/# systemctl status containerd
● containerd.service - containerd container runtime
   Loaded: loaded (/etc/systemd/system/containerd.service; enabled; vendor preset: enabled)
   Active: activating (auto-restart) (Result: signal) since Wed 2019-12-18 11:57:20 UTC; 703ms ago
     Docs: https://containerd.io
  Process: 20885 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
  Process: 20886 ExecStart=/usr/local/bin/containerd (code=killed, signal=KILL)
 Main PID: 20886 (code=killed, signal=KILL)
    Tasks: 0
   Memory: 4.1M
   CGroup: /kubepods/podfb87cd2a-bc2d-46ac-b5c0-e1673b21a438/cce19d47d2587cc2c3996a0d1042d70ef25434abe454edd5a269e98edaae670d/system.slice/containerd.service
root@kind-worker-1:/# systemctl status containerd
● containerd.service - containerd container runtime
   Loaded: loaded (/etc/systemd/system/containerd.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2019-12-18 11:57:22 UTC; 1s ago
     Docs: https://containerd.io
  Process: 22510 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
 Main PID: 22516 (containerd)
    Tasks: 14
   Memory: 29.7M
   CGroup: /kubepods/podfb87cd2a-bc2d-46ac-b5c0-e1673b21a438/cce19d47d2587cc2c3996a0d1042d70ef25434abe454edd5a269e98edaae670d/system.slice/containerd.service
           └─22516 /usr/local/bin/containerd

Dec 18 11:57:22 kind-worker-1 containerd[22516]: time="2019-12-18T11:57:22.101516635Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 18 11:57:22 kind-worker-1 containerd[22516]: time="2019-12-18T11:57:22.101961881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 18 11:57:22 kind-worker-1 containerd[22516]: time="2019-12-18T11:57:22.102229859Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 18 11:57:22 kind-worker-1 containerd[22516]: time="2019-12-18T11:57:22.102297986Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 18 11:57:22 kind-worker-1 containerd[22516]: time="2019-12-18T11:57:22.102312737Z" level=info msg="containerd successfully booted in 0.054324s"
Dec 18 11:57:22 kind-worker-1 containerd[22516]: time="2019-12-18T11:57:22.109845987Z" level=info msg="Start subscribing containerd event"
Dec 18 11:57:22 kind-worker-1 containerd[22516]: time="2019-12-18T11:57:22.109929056Z" level=info msg="Start recovering state"
Dec 18 11:57:22 kind-worker-1 containerd[22516]: time="2019-12-18T11:57:22.123202529Z" level=info msg="Start event monitor"
Dec 18 11:57:22 kind-worker-1 containerd[22516]: time="2019-12-18T11:57:22.123240646Z" level=info msg="Start snapshots syncer"
Dec 18 11:57:22 kind-worker-1 containerd[22516]: time="2019-12-18T11:57:22.123250853Z" level=info msg="Start streaming server"