
kata-with-k8s: Add cgroupDriver for containerd #4130

Merged
1 commit merged into kata-containers:main on Jun 28, 2022

Conversation

surajssd
Contributor

This commit updates the "Run Kata Containers with Kubernetes" guide to include
the cgroupDriver configuration via "KubeletConfiguration". Without this
setting, kubeadm defaults to the systemd cgroupDriver, and containerd with Kata
cannot spawn containers with the systemd cgroup driver.
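
For reference, below is a minimal sketch of the kind of kubeadm configuration stanza the updated guide describes; field names follow the kubelet.config.k8s.io/v1beta1 API, and the exact snippet added to the doc may differ:

# Illustrative kubeadm config fragment, assuming it is passed via `kubeadm init --config`.
# It keeps the kubelet on the cgroupfs driver instead of kubeadm's systemd default.
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: cgroupfs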

@surajssd surajssd requested a review from a team as a code owner April 21, 2022 13:19
@katacontainersbot katacontainersbot added the size/small Small and simple task label Apr 21, 2022
@amshinde
Member

@surajssd What issues are you seeing with the systemd cgroup driver?

@surajssd
Contributor Author

Here is a pod config I tried creating:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-kata
spec:
  runtimeClassName: kata
  containers:
  - name: nginx
    image: nginx

The pod stays in the ContainerCreating state:

$ kubectl get pods
NAME         READY   STATUS              RESTARTS   AGE
nginx-kata   0/1     ContainerCreating   0          2m48s

@amshinde These are the kubelet logs, which show why pod creation failed:

Apr 22 05:17:07 fedora kubelet[6890]: I0422 05:17:07.120489    6890 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jccjm\" (UniqueName: \"kubernetes.io/projected/1643de63-a84a-4636-a561-416f4541537c-kube-api-access-jccjm\") pod \"nginx-kata\" (UID: \"1643de63-a84a-4636-a561-416f4541537c\") " pod="default/nginx-kata"
Apr 22 05:17:09 fedora kubelet[6890]: W0422 05:17:09.462761    6890 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1643de63_a84a_4636_a561_416f4541537c.slice/cri-containerd-3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b.scope WatchSource:0}: task 3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b not found: not found
Apr 22 05:17:21 fedora kubelet[6890]: E0422 05:17:21.775897    6890 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1643de63_a84a_4636_a561_416f4541537c.slice/cri-containerd-3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b.scope: task 3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b not found: not found
Apr 22 05:18:21 fedora kubelet[6890]: E0422 05:18:21.765015    6890 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1643de63_a84a_4636_a561_416f4541537c.slice/cri-containerd-3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b.scope: task 3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b not found: not found

The events related to the pod:

$ kubectl get events
...
3s   Normal    Scheduled                 pod/nginx-kata   Successfully assigned default/nginx-kata to fedora
0s   Warning   FailedCreatePodSandBox    pod/nginx-kata   Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded
0s   Warning   FailedCreatePodSandBox    pod/nginx-kata   Failed to create pod sandbox: rpc error: code = Unknown desc = failed to reserve sandbox name "nginx-kata_default_1643de63-a84a-4636-a561-416f4541537c_0": name "nginx-kata_default_1643de63-a84a-4636-a561-416f4541537c_0" is reserved for "3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b"
0s   Warning   FailedCreatePodSandBox    pod/nginx-kata   Failed to create pod sandbox: rpc error: code = Unknown desc = failed to reserve sandbox name "nginx-kata_default_1643de63-a84a-4636-a561-416f4541537c_0": name "nginx-kata_default_1643de63-a84a-4636-a561-416f4541537c_0" is reserved for "3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b"

And these are the containerd logs:

Apr 22 05:17:07 fedora containerd[712]: time="2022-04-22T05:17:07.424181566Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:nginx-kata,Uid:1643de63-a84a-4636-a561-416f4541537c,Namespace:default,Attempt:0,}"
Apr 22 05:17:07 fedora containerd[712]: time="2022-04-22T05:17:07.424277885Z" level=debug msg="Sandbox config &PodSandboxConfig{Metadata:&PodSandboxMetadata{Name:nginx-kata,Uid:1643de63-a84a-4636-a561-416f4541537c,Namespace:default,Attempt:0,},Hostname:nginx-kata,LogDirectory:/var/log/pods/default_nginx-kata_1643de63-a84a-4636-a561-416f4541537c,DnsConfig:&DNSConfig{Servers:[10.96.0.10],Searches:[default.svc.cluster.local svc.cluster.local cluster.local ],Options:[ndots:5],},PortMappings:[]*PortMapping{},Labels:map[string]string{io.kubernetes.pod.name: nginx-kata,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1643de63-a84a-4636-a561-416f4541537c,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"name\":\"nginx-kata\",\"namespace\":\"default\"},\"spec\":{\"containers\":[{\"image\":\"nginx\",\"name\":\"nginx\"}],\"runtimeClassName\":\"kata\"}}\n,kubernetes.io/config.seen: 2022-04-22T05:17:07.057930061Z,kubernetes.io/config.source: api,},Linux:&LinuxPodSandboxConfig{CgroupParent:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1643de63_a84a_4636_a561_416f4541537c.slice,SecurityContext:&LinuxSandboxSecurityContext{NamespaceOptions:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,TargetId:,},SelinuxOptions:nil,RunAsUser:nil,ReadonlyRootfs:false,SupplementalGroups:[],Privileged:false,SeccompProfilePath:runtime/default,RunAsGroup:nil,Seccomp:&SecurityProfile{ProfileType:RuntimeDefault,LocalhostRef:,},Apparmor:nil,},Sysctls:map[string]string{},},}"
Apr 22 05:17:07 fedora containerd[712]: time="2022-04-22T05:17:07.424318587Z" level=debug msg="Generated id \"3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b\" for sandbox \"nginx-kata_default_1643de63-a84a-4636-a561-416f4541537c_0\""
Apr 22 05:17:07 fedora containerd[712]: time="2022-04-22T05:17:07.424400190Z" level=debug msg="Use OCI {Type:io.containerd.kata.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} for sandbox \"3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b\""
Apr 22 05:17:07 fedora containerd[712]: 2022-04-22 05:17:07.520 [INFO][20999] plugin.go 265: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {fedora-k8s-nginx--kata-eth0  default  1643de63-a84a-4636-a561-416f4541537c 34860 0 2022-04-22 05:17:06 +0000 UTC <nil> <nil> map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] []  []} {k8s  fedora  nginx-kata eth0 default [] []   [kns.default ksa.default.default] calib0680684ccf  []}} ContainerID="3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b" Namespace="default" Pod="nginx-kata" WorkloadEndpoint="fedora-k8s-nginx--kata-"
Apr 22 05:17:07 fedora containerd[712]: 2022-04-22 05:17:07.520 [INFO][20999] k8s.go 73: Extracted identifiers for CmdAddK8s ContainerID="3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b" Namespace="default" Pod="nginx-kata" WorkloadEndpoint="fedora-k8s-nginx--kata-eth0"
Apr 22 05:17:07 fedora containerd[712]: 2022-04-22 05:17:07.579 [INFO][21014] ipam_plugin.go 226: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b" HandleID="k8s-pod-network.3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b" Workload="fedora-k8s-nginx--kata-eth0"
Apr 22 05:17:07 fedora containerd[712]: 2022-04-22 05:17:07.591 [INFO][21014] ipam_plugin.go 266: Auto assigning IP ContainerID="3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b" HandleID="k8s-pod-network.3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b" Workload="fedora-k8s-nginx--kata-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a8670), Attrs:map[string]string{"namespace":"default", "node":"fedora", "pod":"nginx-kata", "timestamp":"2022-04-22 05:17:07.579807397 +0000 UTC"}, Hostname:"fedora", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Apr 22 05:17:07 fedora containerd[712]: time="2022-04-22T05:17:07Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:356"
Apr 22 05:17:07 fedora containerd[712]: time="2022-04-22T05:17:07Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:371"
Apr 22 05:17:07 fedora containerd[712]: 2022-04-22 05:17:07.591 [INFO][21014] ipam.go 104: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'fedora'
Apr 22 05:17:07 fedora containerd[712]: 2022-04-22 05:17:07.593 [INFO][21014] ipam.go 657: Looking up existing affinities for host handle="k8s-pod-network.3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b" host="fedora"
Apr 22 05:17:07 fedora containerd[712]: 2022-04-22 05:17:07.600 [INFO][21014] ipam.go 369: Looking up existing affinities for host host="fedora"
Apr 22 05:17:07 fedora containerd[712]: 2022-04-22 05:17:07.605 [INFO][21014] ipam.go 486: Trying affinity for 192.168.124.192/26 host="fedora"
Apr 22 05:17:07 fedora containerd[712]: 2022-04-22 05:17:07.609 [INFO][21014] ipam.go 152: Attempting to load block cidr=192.168.124.192/26 host="fedora"
Apr 22 05:17:07 fedora containerd[712]: 2022-04-22 05:17:07.613 [INFO][21014] ipam.go 229: Affinity is confirmed and block has been loaded cidr=192.168.124.192/26 host="fedora"
Apr 22 05:17:07 fedora containerd[712]: 2022-04-22 05:17:07.613 [INFO][21014] ipam.go 1177: Attempting to assign 1 addresses from block block=192.168.124.192/26 handle="k8s-pod-network.3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b" host="fedora"
Apr 22 05:17:07 fedora containerd[712]: 2022-04-22 05:17:07.615 [INFO][21014] ipam.go 1679: Creating new handle: k8s-pod-network.3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b
Apr 22 05:17:07 fedora containerd[712]: 2022-04-22 05:17:07.620 [INFO][21014] ipam.go 1200: Writing block in order to claim IPs block=192.168.124.192/26 handle="k8s-pod-network.3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b" host="fedora"
Apr 22 05:17:07 fedora containerd[712]: 2022-04-22 05:17:07.633 [INFO][21014] ipam.go 1213: Successfully claimed IPs: [192.168.124.235/26] block=192.168.124.192/26 handle="k8s-pod-network.3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b" host="fedora"
Apr 22 05:17:07 fedora containerd[712]: 2022-04-22 05:17:07.633 [INFO][21014] ipam.go 844: Auto-assigned 1 out of 1 IPv4s: [192.168.124.235/26] handle="k8s-pod-network.3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b" host="fedora"
Apr 22 05:17:07 fedora containerd[712]: time="2022-04-22T05:17:07Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:377"
Apr 22 05:17:07 fedora containerd[712]: 2022-04-22 05:17:07.633 [INFO][21014] ipam_plugin.go 284: Calico CNI IPAM assigned addresses IPv4=[192.168.124.235/26] IPv6=[] ContainerID="3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b" HandleID="k8s-pod-network.3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b" Workload="fedora-k8s-nginx--kata-eth0"
Apr 22 05:17:07 fedora containerd[712]: 2022-04-22 05:17:07.636 [INFO][20999] k8s.go 382: Populated endpoint ContainerID="3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b" Namespace="default" Pod="nginx-kata" WorkloadEndpoint="fedora-k8s-nginx--kata-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"fedora-k8s-nginx--kata-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"1643de63-a84a-4636-a561-416f4541537c", ResourceVersion:"34860", Generation:0, CreationTimestamp:time.Date(2022, time.April, 22, 5, 17, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"fedora", ContainerID:"", Pod:"nginx-kata", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.124.235/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calib0680684ccf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil)}}
Apr 22 05:17:07 fedora containerd[712]: 2022-04-22 05:17:07.636 [INFO][20999] k8s.go 383: Calico CNI using IPs: [192.168.124.235/32] ContainerID="3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b" Namespace="default" Pod="nginx-kata" WorkloadEndpoint="fedora-k8s-nginx--kata-eth0"
Apr 22 05:17:07 fedora containerd[712]: 2022-04-22 05:17:07.636 [INFO][20999] dataplane_linux.go 68: Setting the host side veth name to calib0680684ccf ContainerID="3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b" Namespace="default" Pod="nginx-kata" WorkloadEndpoint="fedora-k8s-nginx--kata-eth0"
Apr 22 05:17:07 fedora containerd[712]: 2022-04-22 05:17:07.640 [INFO][20999] dataplane_linux.go 453: Disabling IPv4 forwarding ContainerID="3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b" Namespace="default" Pod="nginx-kata" WorkloadEndpoint="fedora-k8s-nginx--kata-eth0"
Apr 22 05:17:07 fedora containerd[712]: 2022-04-22 05:17:07.655 [INFO][20999] k8s.go 410: Added Mac, interface name, and active container ID to endpoint ContainerID="3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b" Namespace="default" Pod="nginx-kata" WorkloadEndpoint="fedora-k8s-nginx--kata-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"fedora-k8s-nginx--kata-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"1643de63-a84a-4636-a561-416f4541537c", ResourceVersion:"34860", Generation:0, CreationTimestamp:time.Date(2022, time.April, 22, 5, 17, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"fedora", ContainerID:"3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b", Pod:"nginx-kata", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.124.235/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calib0680684ccf", MAC:"22:59:08:a5:d8:d5", Ports:[]v3.WorkloadEndpointPort(nil)}}
Apr 22 05:17:07 fedora containerd[712]: 2022-04-22 05:17:07.667 [INFO][20999] k8s.go 484: Wrote updated endpoint to datastore ContainerID="3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b" Namespace="default" Pod="nginx-kata" WorkloadEndpoint="fedora-k8s-nginx--kata-eth0"
Apr 22 05:17:07 fedora containerd[712]: time="2022-04-22T05:17:07.696999897Z" level=debug msg="cni result for sandbox \"3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b\": {\"Interfaces\":{\"calib0680684ccf\":{\"IPConfigs\":null,\"Mac\":\"\",\"Sandbox\":\"\"},\"eth0\":{\"IPConfigs\":[{\"IP\":\"192.168.124.235\",\"Gateway\":\"\"}],\"Mac\":\"\",\"Sandbox\":\"\"},\"lo\":{\"IPConfigs\":[{\"IP\":\"127.0.0.1\",\"Gateway\":\"\"},{\"IP\":\"::1\",\"Gateway\":\"\"}],\"Mac\":\"00:00:00:00:00:00\",\"Sandbox\":\"/var/run/netns/cni-313424f5-8548-5641-b4f1-f9827cb47bd0\"}},\"DNS\":[{},{}],\"Routes\":null}"
Apr 22 05:17:07 fedora containerd[712]: time="2022-04-22T05:17:07.697759378Z" level=debug msg="Sandbox container \"3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b\" spec: (*specs.Spec)(0xc001204700){Version:(string)1.0.2-dev Process:(*specs.Process)(0xc000182000){Terminal:(bool)false ConsoleSize:(*specs.Box)<nil> User:(specs.User){UID:(uint32)0 GID:(uint32)0 Umask:(*uint32)<nil> AdditionalGids:([]uint32)<nil> Username:(string)} Args:([]string)[/pause] CommandLine:(string) Env:([]string)[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] Cwd:(string)/ Capabilities:(*specs.LinuxCapabilities)(0xc001204780){Bounding:([]string)[CAP_CHOWN CAP_DAC_OVERRIDE CAP_FSETID CAP_FOWNER CAP_MKNOD CAP_NET_RAW CAP_SETGID CAP_SETUID CAP_SETFCAP CAP_SETPCAP CAP_NET_BIND_SERVICE CAP_SYS_CHROOT CAP_KILL CAP_AUDIT_WRITE] Effective:([]string)[CAP_CHOWN CAP_DAC_OVERRIDE CAP_FSETID CAP_FOWNER CAP_MKNOD CAP_NET_RAW CAP_SETGID CAP_SETUID CAP_SETFCAP CAP_SETPCAP CAP_NET_BIND_SERVICE CAP_SYS_CHROOT CAP_KILL CAP_AUDIT_WRITE] Inheritable:([]string)<nil> Permitted:([]string)[CAP_CHOWN CAP_DAC_OVERRIDE CAP_FSETID CAP_FOWNER CAP_MKNOD CAP_NET_RAW CAP_SETGID CAP_SETUID CAP_SETFCAP CAP_SETPCAP CAP_NET_BIND_SERVICE CAP_SYS_CHROOT CAP_KILL CAP_AUDIT_WRITE] Ambient:([]string)<nil>} Rlimits:([]specs.POSIXRlimit)<nil> NoNewPrivileges:(bool)true ApparmorProfile:(string) OOMScoreAdj:(*int)(0xc0008e0ae8)-998 SelinuxLabel:(string)} Root:(*specs.Root)(0xc0007becd8){Path:(string)rootfs Readonly:(bool)true} Hostname:(string)nginx-kata Mounts:([]specs.Mount)[{Destination:(string)/proc Type:(string)proc Source:(string)proc Options:([]string)[nosuid noexec nodev]} {Destination:(string)/dev Type:(string)tmpfs Source:(string)tmpfs Options:([]string)[nosuid strictatime mode=755 size=65536k]} {Destination:(string)/dev/pts Type:(string)devpts Source:(string)devpts Options:([]string)[nosuid noexec newinstance ptmxmode=0666 mode=0620 gid=5]} {Destination:(string)/dev/shm Type:(string)tmpfs Source:(string)shm Options:([]string)[nosuid noexec nodev mode=1777 size=65536k]} {Destination:(string)/dev/mqueue Type:(string)mqueue Source:(string)mqueue Options:([]string)[nosuid noexec nodev]} {Destination:(string)/sys Type:(string)sysfs Source:(string)sysfs Options:([]string)[nosuid noexec nodev ro]} {Destination:(string)/dev/shm Type:(string)bind Source:(string)/run/containerd/io.containerd.grpc.v1.cri/sandboxes/3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b/shm Options:([]string)[rbind ro]} {Destination:(string)/etc/resolv.conf Type:(string)bind Source:(string)/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b/resolv.conf Options:([]string)[rbind ro]}] Hooks:(*specs.Hooks)<nil> Annotations:(map[string]string)map[io.kubernetes.cri.sandbox-log-directory:/var/log/pods/default_nginx-kata_1643de63-a84a-4636-a561-416f4541537c io.kubernetes.cri.container-type:sandbox io.kubernetes.cri.sandbox-id:3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b io.kubernetes.cri.sandbox-namespace:default io.kubernetes.cri.sandbox-name:nginx-kata] Linux:(*specs.Linux)(0xc0001820f0){UIDMappings:([]specs.LinuxIDMapping)<nil> GIDMappings:([]specs.LinuxIDMapping)<nil> Sysctl:(map[string]string)map[] Resources:(*specs.LinuxResources)(0xc0001747e0){Devices:([]specs.LinuxDeviceCgroup)[{Allow:(bool)false Type:(string) Major:(*int64)<nil> Minor:(*int64)<nil> Access:(string)rwm}] Memory:(*specs.LinuxMemory)<nil> 
CPU:(*specs.LinuxCPU)(0xc000b3e640){Shares:(*uint64)(0xc0008e0b50)2 Quota:(*int64)<nil> Period:(*uint64)<nil> RealtimeRuntime:(*int64)<nil> RealtimePeriod:(*uint64)<nil> Cpus:(string) Mems:(string)} Pids:(*specs.LinuxPids)<nil> BlockIO:(*specs.LinuxBlockIO)<nil> HugepageLimits:([]specs.LinuxHugepageLimit)<nil> Network:(*specs.LinuxNetwork)<nil> Rdma:(map[string]specs.LinuxRdma)<nil> Unified:(map[string]string)<nil>} CgroupsPath:(string)kubepods-besteffort-pod1643de63_a84a_4636_a561_416f4541537c.slice:cri-containerd:3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b Namespaces:([]specs.LinuxNamespace)[{Type:(specs.LinuxNamespaceType)pid Path:(string)} {Type:(specs.LinuxNamespaceType)ipc Path:(string)} {Type:(specs.LinuxNamespaceType)uts Path:(string)} {Type:(specs.LinuxNamespaceType)mount Path:(string)} {Type:(specs.LinuxNamespaceType)network Path:(string)/var/run/netns/cni-313424f5-8548-5641-b4f1-f9827cb47bd0}] Devices:([]specs.LinuxDevice)<nil> Seccomp:(*specs.LinuxSeccomp)<nil> RootfsPropagation:(string) MaskedPaths:([]string)[/proc/acpi /proc/asound /proc/kcore /proc/keys /proc/latency_stats /proc/timer_list /proc/timer_stats /proc/sched_debug /sys/firmware /proc/scsi] ReadonlyPaths:([]string)[/proc/bus /proc/fs /proc/irq /proc/sys /proc/sysrq-trigger] MountLabel:(string) IntelRdt:(*specs.LinuxIntelRdt)<nil> Personality:(*specs.LinuxPersonality)<nil>} Solaris:(*specs.Solaris)<nil> Windows:(*specs.Windows)<nil> VM:(*specs.VM)<nil>}"
Apr 22 05:17:07 fedora containerd[712]: time="2022-04-22T05:17:07.751997623Z" level=debug msg="event published" ns=k8s.io topic=/snapshot/prepare type=containerd.events.SnapshotPrepare
Apr 22 05:17:07 fedora containerd[712]: time="2022-04-22T05:17:07.766217196Z" level=debug msg="event published" ns=k8s.io topic=/containers/create type=containerd.events.ContainerCreate
Apr 22 05:17:07 fedora containerd[712]: time="2022-04-22T05:17:07.931995562Z" level=debug msg="registering ttrpc server"
Apr 22 05:17:07 fedora containerd[712]: time="2022-04-22T05:17:07.932186227Z" level=debug msg="serving api on socket" socket="[inherited from parent]"
Apr 22 05:17:07 fedora containerd[712]: time="2022-04-22T05:17:07.932251667Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b pid=21060
Apr 22 05:17:07 fedora containerd[712]: time="2022-04-22T05:17:07.932885232Z" level=debug msg="Create() start" container=3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b name=containerd-shim-v2 pid=21060 sandbox=3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b source=containerd-kata-shim-v2
Apr 22 05:17:07 fedora containerd[712]: time="2022-04-22T05:17:07.933022690Z" level=debug msg="converting /run/containerd/io.containerd.runtime.v2.task/k8s.io/3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b/config.json" name=containerd-shim-v2 pid=21060 sandbox=3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b source=virtcontainers subsystem=compatoci
Apr 22 05:17:07 fedora kata[21060]: time="2022-04-22T05:17:07.936888662Z" level=info msg="loaded configuration" file=/usr/share/kata-containers/defaults/configuration.toml format=TOML name=containerd-shim-v2 pid=21060 sandbox=3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b source=katautils
Apr 22 05:17:07 fedora kata[21060]: time="2022-04-22T05:17:07.937221895Z" level=info msg="IOMMUPlatform is disabled by default." name=containerd-shim-v2 pid=21060 sandbox=3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b source=katautils
Apr 22 05:17:07 fedora containerd[712]: time="2022-04-22T05:17:07.936888662Z" level=info msg="loaded configuration" file=/usr/share/kata-containers/defaults/configuration.toml format=TOML name=containerd-shim-v2 pid=21060 sandbox=3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b source=katautils
Apr 22 05:17:07 fedora containerd[712]: time="2022-04-22T05:17:07.937221895Z" level=info msg="IOMMUPlatform is disabled by default." name=containerd-shim-v2 pid=21060 sandbox=3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b source=katautils
Apr 22 05:17:07 fedora kata[21060]: time="2022-04-22T05:17:07.937904865Z" level=info msg="shm-size detected: 67108864" source=virtcontainers subsystem=oci
Apr 22 05:17:07 fedora containerd[712]: time="2022-04-22T05:17:07.937904865Z" level=info msg="shm-size detected: 67108864" source=virtcontainers subsystem=oci
Apr 22 05:17:07 fedora kata[21060]: time="2022-04-22T05:17:07.939828711Z" level=info msg="adding volume" name=containerd-shim-v2 pid=21060 sandbox=3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b source=virtcontainers subsystem=qemu volume-type=virtio-fs
Apr 22 05:17:07 fedora containerd[712]: time="2022-04-22T05:17:07.939828711Z" level=info msg="adding volume" name=containerd-shim-v2 pid=21060 sandbox=3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b source=virtcontainers subsystem=qemu volume-type=virtio-fs
Apr 22 05:17:07 fedora kata[21060]: time="2022-04-22T05:17:07.941193872Z" level=info msg="veth interface found" name=containerd-shim-v2 pid=21060 sandbox=3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b source=virtcontainers subsystem=network
Apr 22 05:17:07 fedora kata[21060]: time="2022-04-22T05:17:07.941287938Z" level=info msg="Endpoints found after scan" endpoints="[0xc0000c4900]" name=containerd-shim-v2 pid=21060 sandbox=3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b source=virtcontainers subsystem=network
Apr 22 05:17:07 fedora containerd[712]: time="2022-04-22T05:17:07.941193872Z" level=info msg="veth interface found" name=containerd-shim-v2 pid=21060 sandbox=3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b source=virtcontainers subsystem=network
Apr 22 05:17:07 fedora containerd[712]: time="2022-04-22T05:17:07.941287938Z" level=info msg="Endpoints found after scan" endpoints="[0xc0000c4900]" name=containerd-shim-v2 pid=21060 sandbox=3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b source=virtcontainers subsystem=network
Apr 22 05:17:07 fedora containerd[712]: time="2022-04-22T05:17:07.941403529Z" level=info msg="Attaching endpoint" endpoint-type=virtual hotplug=false name=containerd-shim-v2 pid=21060 sandbox=3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b source=virtcontainers subsystem=network
Apr 22 05:17:07 fedora kata[21060]: time="2022-04-22T05:17:07.941403529Z" level=info msg="Attaching endpoint" endpoint-type=virtual hotplug=false name=containerd-shim-v2 pid=21060 sandbox=3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b source=virtcontainers subsystem=network
Apr 22 05:17:07 fedora kata[21060]: time="2022-04-22T05:17:07.942002856Z" level=info msg="connect TCFilter to VM network" name=containerd-shim-v2 pid=21060 sandbox=3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b source=virtcontainers subsystem=network
Apr 22 05:17:07 fedora containerd[712]: time="2022-04-22T05:17:07.942002856Z" level=info msg="connect TCFilter to VM network" name=containerd-shim-v2 pid=21060 sandbox=3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b source=virtcontainers subsystem=network
Apr 22 05:17:08 fedora containerd[712]: time="2022-04-22T05:17:08.958485729Z" level=debug msg="garbage collected" d=23.372077ms
...
Apr 22 05:21:24 fedora containerd[712]: time="2022-04-22T05:21:24.958136818Z" level=debug msg="event forwarded" ns=k8s.io topic=/tasks/exec-started type=containerd.events.TaskExecStarted
Apr 22 05:21:24 fedora containerd[712]: time="2022-04-22T05:21:24.962335953Z" level=debug msg="ExecSync for \"05f2d45a0a454256f13627a2d8804f45cb7fb8fd8a9780d0e68e184a8376b42c\" returns with exit code 0"
Apr 22 05:21:24 fedora containerd[712]: time="2022-04-22T05:21:24.966891999Z" level=debug msg="Finish piping \"stderr\" of container exec \"afcf6fd85d7c32bf8ab614a975672fbe70de596b16ad4096bd01b6f2de1d2ed4\""
Apr 22 05:21:24 fedora containerd[712]: time="2022-04-22T05:21:24.967086406Z" level=debug msg="Finish piping \"stdout\" of container exec \"afcf6fd85d7c32bf8ab614a975672fbe70de596b16ad4096bd01b6f2de1d2ed4\""
Apr 22 05:21:24 fedora containerd[712]: time="2022-04-22T05:21:24.967864549Z" level=debug msg="Exec process \"afcf6fd85d7c32bf8ab614a975672fbe70de596b16ad4096bd01b6f2de1d2ed4\" exits with exit code 0 and error <nil>"
Apr 22 05:21:24 fedora containerd[712]: time="2022-04-22T05:21:24.967986310Z" level=debug msg="Stream pipe for exec process \"afcf6fd85d7c32bf8ab614a975672fbe70de596b16ad4096bd01b6f2de1d2ed4\" done"
Apr 22 05:21:25 fedora containerd[712]: time="2022-04-22T05:21:25.017001909Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:nginx-kata,Uid:1643de63-a84a-4636-a561-416f4541537c,Namespace:default,Attempt:0,}"
Apr 22 05:21:25 fedora containerd[712]: time="2022-04-22T05:21:25.017375545Z" level=debug msg="Sandbox config &PodSandboxConfig{Metadata:&PodSandboxMetadata{Name:nginx-kata,Uid:1643de63-a84a-4636-a561-416f4541537c,Namespace:default,Attempt:0,},Hostname:nginx-kata,LogDirectory:/var/log/pods/default_nginx-kata_1643de63-a84a-4636-a561-416f4541537c,DnsConfig:&DNSConfig{Servers:[10.96.0.10],Searches:[default.svc.cluster.local svc.cluster.local cluster.local ],Options:[ndots:5],},PortMappings:[]*PortMapping{},Labels:map[string]string{io.kubernetes.pod.name: nginx-kata,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1643de63-a84a-4636-a561-416f4541537c,},Annotations:map[string]string{cni.projectcalico.org/containerID: 3a0b9ebe70d0932d6d1d4e6841ba2ec9a7f9d001f531be6261392dcd6e2ffd1b,cni.projectcalico.org/podIP: ,cni.projectcalico.org/podIPs: ,kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"name\":\"nginx-kata\",\"namespace\":\"default\"},\"spec\":{\"containers\":[{\"image\":\"nginx\",\"name\":\"nginx\"}],\"runtimeClassName\":\"kata\"}}\n,kubernetes.io/config.seen: 2022-04-22T05:17:07.057930061Z,kubernetes.io/config.source: api,},Linux:&LinuxPodSandboxConfig{CgroupParent:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1643de63_a84a_4636_a561_416f4541537c.slice,SecurityContext:&LinuxSandboxSecurityContext{NamespaceOptions:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,TargetId:,},SelinuxOptions:nil,RunAsUser:nil,ReadonlyRootfs:false,SupplementalGroups:[],Privileged:false,SeccompProfilePath:runtime/default,RunAsGroup:nil,Seccomp:&SecurityProfile{ProfileType:RuntimeDefault,LocalhostRef:,},Apparmor:nil,},Sysctls:map[string]string{},},}"
Apr 22 05:21:25 fedora containerd[712]: time="2022-04-22T05:21:25.017547459Z" level=debug msg="Generated id \"833b15b1c7bd01a1113424ab9d324973ddd83ff1df338853e371882852212c78\" for sandbox \"nginx-kata_default_1643de63-a84a-4636-a561-416f4541537c_0\""
Apr 22 05:21:25 fedora containerd[712]: time="2022-04-22T05:21:25.017727502Z" level=debug msg="Use OCI {Type:io.containerd.kata.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} for sandbox \"833b15b1c7bd01a1113424ab9d324973ddd83ff1df338853e371882852212c78\""

@liubin
Member

liubin commented Apr 22, 2022

Hi @surajssd, thank you. You need to create an issue describing your problem and add Fixes: #xxx to your commit message.
Please see here for more info.

This commit updates the "Run Kata Containers with Kubernetes" guide to include
the cgroupDriver configuration via "KubeletConfiguration". Without this
setting, kubeadm defaults to the systemd cgroupDriver, and containerd with Kata
cannot spawn containers with the systemd cgroup driver.

Fixes: kata-containers#4262

Signed-off-by: Suraj Deshmukh <suraj.deshmukh@microsoft.com>
@surajssd
Contributor Author

@liubin done.

@GabyCT
Contributor

GabyCT commented May 18, 2022

/test

@surajssd
Contributor Author

surajssd commented Jun 1, 2022

cc: @GabyCT

@liubin
Member

liubin commented Jun 9, 2022

Can anyone take a look? @jodh-intel @bergwolf @fidencio @lifupan @amshinde @GabyCT

@snir911
Member

snir911 commented Jun 9, 2022

Hi @surajssd, AFAIR it should work when sandbox_cgroup_only=true is set in kata's configuration.toml (if it does work, it's worth mentioning that this doc change is needed only when it is set to false). Can you give it a test? (It was definitely tested with CRI-O; I'm not sure about containerd.)
Related to: #2959
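
For anyone giving that suggestion a test, here is a sketch of the relevant setting in kata's configuration.toml; the path is taken from the shim logs above, and the surrounding file layout is assumed:

# /usr/share/kata-containers/defaults/configuration.toml
[runtime]
# When true, the runtime constrains all Kata host-side processes within the
# sandbox cgroup, as suggested above; the stock default may differ per release.
sandbox_cgroup_only = true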

@GabyCT GabyCT merged commit b2c0387 into kata-containers:main Jun 28, 2022