"Error adding network: failed to Statfs" during pod startup #72044

Closed
DreadPirateShawn opened this issue Dec 14, 2018 · 24 comments
Labels
kind/bug: Categorizes issue or PR as related to a bug.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
sig/network: Categorizes an issue or PR as relevant to SIG Network.
sig/node: Categorizes an issue or PR as relevant to SIG Node.

Comments

@DreadPirateShawn

What happened:

During pod creation, we sometimes see the following error-level kubelet logs:

E1214 03:20:14.987393    3154 cni.go:260] Error adding network: failed to Statfs "/proc/93986/ns/net": no such file or directory
E1214 03:20:14.987412    3154 cni.go:228] Error while adding to cni network: failed to Statfs "/proc/93986/ns/net": no such file or directory
E1214 03:20:15.303227    3154 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to set up sandbox container "685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" network for pod "my-cron-1544757600-zcpgm": NetworkPlugin cni failed to set up pod "my-cron-1544757600-zcpgm_mynamespace" network: failed to Statfs "/proc/93986/ns/net": no such file or directory
E1214 03:20:15.303263    3154 kuberuntime_sandbox.go:56] CreatePodSandbox for pod "my-cron-1544757600-zcpgm_mynamespace(285db220-ff4f-11e8-a630-005056aeab8e)" failed: rpc error: code = Unknown desc = failed to set up sandbox container "685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" network for pod "my-cron-1544757600-zcpgm": NetworkPlugin cni failed to set up pod "my-cron-1544757600-zcpgm_mynamespace" network: failed to Statfs "/proc/93986/ns/net": no such file or directory
E1214 03:20:15.303294    3154 kuberuntime_manager.go:646] createPodSandbox for pod "my-cron-1544757600-zcpgm_mynamespace(285db220-ff4f-11e8-a630-005056aeab8e)" failed: rpc error: code = Unknown desc = failed to set up sandbox container "685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" network for pod "my-cron-1544757600-zcpgm": NetworkPlugin cni failed to set up pod "my-cron-1544757600-zcpgm_mynamespace" network: failed to Statfs "/proc/93986/ns/net": no such file or directory
E1214 03:20:15.303463    3154 pod_workers.go:186] Error syncing pod 285db220-ff4f-11e8-a630-005056aeab8e ("my-cron-1544757600-zcpgm_mynamespace(285db220-ff4f-11e8-a630-005056aeab8e)"), skipping: failed to "CreatePodSandbox" for "my-cron-1544757600-zcpgm_mynamespace(285db220-ff4f-11e8-a630-005056aeab8e)" with CreatePodSandboxError: "CreatePodSandbox for pod \"my-cron-1544757600-zcpgm_mynamespace(285db220-ff4f-11e8-a630-005056aeab8e)\" failed: rpc error: code = Unknown desc = failed to set up sandbox container \"685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243\" network for pod \"my-cron-1544757600-zcpgm\": NetworkPlugin cni failed to set up pod \"my-cron-1544757600-zcpgm_mynamespace\" network: failed to Statfs \"/proc/93986/ns/net\": no such file or directory"
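
For context on where this message comes from: the container runtime hands the CNI plugin a netns path of the form /proc/<pause-pid>/ns/net (visible as CNI_NETNS in the Calico logs further down), and the plugin stats that path before creating the veth pair. If the pause process has already exited, the /proc entry is gone and the syscall returns "no such file or directory". A minimal sketch of that check, assuming golang.org/x/sys/unix (the function name here is illustrative, not the actual plugin code):

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// checkNetNS mimics the namespace check a CNI plugin performs before creating
// the veth pair: it Statfs's the netns path passed in CNI_NETNS. If the
// sandbox (pause) process has already exited, /proc/<pid>/ns/net no longer
// exists and Statfs fails with ENOENT, producing the
// `failed to Statfs "/proc/<pid>/ns/net": no such file or directory` error.
func checkNetNS(nsPath string) error {
	var st unix.Statfs_t
	if err := unix.Statfs(nsPath, &st); err != nil {
		return fmt.Errorf("failed to Statfs %q: %v", nsPath, err)
	}
	return nil
}

func main() {
	// PID taken from the logs above purely for illustration.
	if err := checkNetNS("/proc/93986/ns/net"); err != nil {
		fmt.Println(err) // e.g. failed to Statfs "/proc/93986/ns/net": no such file or directory
	}
}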

What you expected to happen:

Clean pod startup, no error-level logs.

How to reproduce it (as minimally and precisely as possible):

We run about 5 Kubernetes CronJobs every 5 minutes across 13 regions in production -- roughly 18,720 related pods daily -- and we see over a thousand of these errors during pod creation each day.

Anything else we need to know?:

Relevant kubelet log context of a cron pod that manifested this error:

I1214 03:20:07.966521    3154 kubelet.go:1837] SyncLoop (ADD, "api"): "my-cron-1544757600-zcpgm_mynamespace(285db220-ff4f-11e8-a630-005056aeab8e)"
I1214 03:20:08.007480    3154 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ssh" (UniqueName: "kubernetes.io/host-path/285db220-ff4f-11e8-a630-005056aeab8e-etc-ssh") pod "my-cron-1544757600-zcpgm" (UID: "285db220-ff4f-11e8-a630-005056aeab8e")
I1214 03:20:08.007512    3154 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "configmap-mount" (UniqueName: "kubernetes.io/configmap/285db220-ff4f-11e8-a630-005056aeab8e-configmap-mount") pod "my-cron-1544757600-zcpgm" (UID: "285db220-ff4f-11e8-a630-005056aeab8e")
I1214 03:20:08.007606    3154 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-jim0p" (UniqueName: "kubernetes.io/secret/285db220-ff4f-11e8-a630-005056aeab8e-default-token-jim0p") pod "my-cron-1544757600-zcpgm" (UID: "285db220-ff4f-11e8-a630-005056aeab8e")
I1214 03:20:08.108028    3154 reconciler.go:252] operationExecutor.MountVolume started for volume "etc-ssh" (UniqueName: "kubernetes.io/host-path/285db220-ff4f-11e8-a630-005056aeab8e-etc-ssh") pod "my-cron-1544757600-zcpgm" (UID: "285db220-ff4f-11e8-a630-005056aeab8e")
I1214 03:20:08.108058    3154 reconciler.go:252] operationExecutor.MountVolume started for volume "configmap-mount" (UniqueName: "kubernetes.io/configmap/285db220-ff4f-11e8-a630-005056aeab8e-configmap-mount") pod "my-cron-1544757600-zcpgm" (UID: "285db220-ff4f-11e8-a630-005056aeab8e")
I1214 03:20:08.108144    3154 operation_generator.go:558] MountVolume.SetUp succeeded for volume "etc-ssh" (UniqueName: "kubernetes.io/host-path/285db220-ff4f-11e8-a630-005056aeab8e-etc-ssh") pod "my-cron-1544757600-zcpgm" (UID: "285db220-ff4f-11e8-a630-005056aeab8e")
I1214 03:20:08.108256    3154 reconciler.go:252] operationExecutor.MountVolume started for volume "default-token-jim0p" (UniqueName: "kubernetes.io/secret/285db220-ff4f-11e8-a630-005056aeab8e-default-token-jim0p") pod "my-cron-1544757600-zcpgm" (UID: "285db220-ff4f-11e8-a630-005056aeab8e")
I1214 03:20:08.112092    3154 operation_generator.go:558] MountVolume.SetUp succeeded for volume "configmap-mount" (UniqueName: "kubernetes.io/configmap/285db220-ff4f-11e8-a630-005056aeab8e-configmap-mount") pod "my-cron-1544757600-zcpgm" (UID: "285db220-ff4f-11e8-a630-005056aeab8e")
I1214 03:20:08.667696    3154 operation_generator.go:558] MountVolume.SetUp succeeded for volume "default-token-jim0p" (UniqueName: "kubernetes.io/secret/285db220-ff4f-11e8-a630-005056aeab8e-default-token-jim0p") pod "my-cron-1544757600-zcpgm" (UID: "285db220-ff4f-11e8-a630-005056aeab8e")
I1214 03:20:08.972176    3154 kuberuntime_manager.go:385] No sandbox for pod "my-cron-1544757600-zcpgm_mynamespace(285db220-ff4f-11e8-a630-005056aeab8e)" can be found. Need to start a new one
2018-12-14 03:20:10.836 [INFO][93423] calico.go 75: Extracted identifiers EndpointIDs=&utils.WEPIdentifiers{Namespace:"mynamespace", WEPName:"", WorkloadEndpointIdentifiers:names.WorkloadEndpointIdentifiers{Node:"knode7", Orchestrator:"k8s", Endpoint:"eth0", Workload:"", Pod:"my-cron-1544757600-zcpgm", ContainerID:"1388ad2ccbc22d94257e9d4b8f85f0eb3c168b1121243546a4404550b66ed092"}}
2018-12-14 03:20:10.836 [INFO][93423] utils.go 379: Configured environment: [CNI_COMMAND=ADD CNI_CONTAINERID=1388ad2ccbc22d94257e9d4b8f85f0eb3c168b1121243546a4404550b66ed092 CNI_NETNS=/proc/93228/ns/net CNI_ARGS=IgnoreUnknown=1;IgnoreUnknown=1;K8S_POD_NAMESPACE=mynamespace;K8S_POD_NAME=my-cron-1544757600-zcpgm;K8S_POD_INFRA_CONTAINER_ID=1388ad2ccbc22d94257e9d4b8f85f0eb3c168b1121243546a4404550b66ed092 CNI_IFNAME=eth0 CNI_PATH=/opt/cni/bin LANG=en_US LANGUAGE=en_US: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HYPERKUBE_IMAGE=myregistry:5000/hyperkube:v1.11.3-0 PAUSE_IMAGE=myregistry:5000/pause:3.1-0 API_ENDPOINT=10.130.16.16 CADVISOR_PORT=4194 KUBE_API_SECURE_PORT=6443 KUBE_CONFIG=kubeconfig KUBELET_PORT=10250 LOGLEVEL=2 POD_MANIFEST_DIR=/etc/kubernetes/manifests SELF_HOSTNAME=knode7 SELF_IP=10.130.16.154 NODETYPE=kube_minion ETCD_ENDPOINTS=http://etcd1:4001,http://etcd2:4001,http://etcd3:4001,http://etcd4:4001,http://etcd5:4001 KUBECONFIG=kubeconfig]
2018-12-14 03:20:10.840 [INFO][93423] k8s.go 60: Extracted identifiers for CmdAddK8s ContainerID="1388ad2ccbc22d94257e9d4b8f85f0eb3c168b1121243546a4404550b66ed092" Namespace="mynamespace" Pod="my-cron-1544757600-zcpgm" WorkloadEndpoint="knode7-k8s-my--cron--1544757600--zcpgm-eth0"
2018-12-14 03:20:10.908 [INFO][93450] utils.go 379: Configured environment: [CNI_COMMAND=ADD CNI_CONTAINERID=1388ad2ccbc22d94257e9d4b8f85f0eb3c168b1121243546a4404550b66ed092 CNI_NETNS=/proc/93228/ns/net CNI_ARGS=IgnoreUnknown=1;IgnoreUnknown=1;K8S_POD_NAMESPACE=mynamespace;K8S_POD_NAME=my-cron-1544757600-zcpgm;K8S_POD_INFRA_CONTAINER_ID=1388ad2ccbc22d94257e9d4b8f85f0eb3c168b1121243546a4404550b66ed092 CNI_IFNAME=eth0 CNI_PATH=/opt/cni/bin LANG=en_US LANGUAGE=en_US: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HYPERKUBE_IMAGE=myregistry:5000/hyperkube:v1.11.3-0 PAUSE_IMAGE=myregistry:5000/pause:3.1-0 API_ENDPOINT=10.130.16.16 CADVISOR_PORT=4194 KUBE_API_SECURE_PORT=6443 KUBE_CONFIG=kubeconfig KUBELET_PORT=10250 LOGLEVEL=2 POD_MANIFEST_DIR=/etc/kubernetes/manifests SELF_HOSTNAME=knode7 SELF_IP=10.130.16.154 NODETYPE=kube_minion ETCD_ENDPOINTS=http://etcd1:4001,http://etcd2:4001,http://etcd3:4001,http://etcd4:4001,http://etcd5:4001 KUBECONFIG=kubeconfig]
2018-12-14 03:20:10.922 [INFO][93423] k8s.go 267: Populated endpoint ContainerID="1388ad2ccbc22d94257e9d4b8f85f0eb3c168b1121243546a4404550b66ed092" Namespace="mynamespace" Pod="my-cron-1544757600-zcpgm" WorkloadEndpoint="knode7-k8s-my--cron--1544757600--zcpgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"knode7-k8s-my--cron--1544757600--zcpgm-eth0", GenerateName:"", Namespace:"mynamespace", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"knode7", ContainerID:"", Pod:"my-cron-1544757600-zcpgm", Endpoint:"eth0", IPNetworks:[]string{"10.130.132.247/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"calico-k8s-network"}, InterfaceName:"", MAC:"", Ports:[]v3.EndpointPort(nil)}}
2018-12-14 03:20:10.923 [INFO][93423] network.go 31: Setting the host side veth name to califaf83cce795 ContainerID="1388ad2ccbc22d94257e9d4b8f85f0eb3c168b1121243546a4404550b66ed092" Namespace="mynamespace" Pod="my-cron-1544757600-zcpgm" WorkloadEndpoint="knode7-k8s-my--cron--1544757600--zcpgm-eth0"
2018-12-14 03:20:10.924 [INFO][93423] network.go 326: Disabling IPv4 forwarding ContainerID="1388ad2ccbc22d94257e9d4b8f85f0eb3c168b1121243546a4404550b66ed092" Namespace="mynamespace" Pod="my-cron-1544757600-zcpgm" WorkloadEndpoint="knode7-k8s-my--cron--1544757600--zcpgm-eth0"
2018-12-14 03:20:10.968 [INFO][93423] k8s.go 294: Added Mac, interface name, and active container ID to endpoint ContainerID="1388ad2ccbc22d94257e9d4b8f85f0eb3c168b1121243546a4404550b66ed092" Namespace="mynamespace" Pod="my-cron-1544757600-zcpgm" WorkloadEndpoint="knode7-k8s-my--cron--1544757600--zcpgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"knode7-k8s-my--cron--1544757600--zcpgm-eth0", GenerateName:"", Namespace:"mynamespace", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"knode7", ContainerID:"1388ad2ccbc22d94257e9d4b8f85f0eb3c168b1121243546a4404550b66ed092", Pod:"my-cron-1544757600-zcpgm", Endpoint:"eth0", IPNetworks:[]string{"10.130.132.247/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"calico-k8s-network"}, InterfaceName:"califaf83cce795", MAC:"0e:da:a3:a7:a2:a0", Ports:[]v3.EndpointPort(nil)}}
2018-12-14 03:20:10.972 [INFO][93423] k8s.go 302: Wrote updated endpoint to datastore ContainerID="1388ad2ccbc22d94257e9d4b8f85f0eb3c168b1121243546a4404550b66ed092" Namespace="mynamespace" Pod="my-cron-1544757600-zcpgm" WorkloadEndpoint="knode7-k8s-my--cron--1544757600--zcpgm-eth0"
I1214 03:20:11.308752    3154 kubelet.go:1882] SyncLoop (PLEG): "my-cron-1544757600-zcpgm_mynamespace(285db220-ff4f-11e8-a630-005056aeab8e)", event: &pleg.PodLifecycleEvent{ID:"285db220-ff4f-11e8-a630-005056aeab8e", Type:"ContainerStarted", Data:"1388ad2ccbc22d94257e9d4b8f85f0eb3c168b1121243546a4404550b66ed092"}
I1214 03:20:12.347092    3154 kubelet.go:1882] SyncLoop (PLEG): "my-cron-1544757600-zcpgm_mynamespace(285db220-ff4f-11e8-a630-005056aeab8e)", event: &pleg.PodLifecycleEvent{ID:"285db220-ff4f-11e8-a630-005056aeab8e", Type:"ContainerStarted", Data:"aca17a91b4668098ff70eeb6d519ff4aa55946bbccabd30b1eea8a0cbdc4845e"}
I1214 03:20:13.368018    3154 kubelet.go:1882] SyncLoop (PLEG): "my-cron-1544757600-zcpgm_mynamespace(285db220-ff4f-11e8-a630-005056aeab8e)", event: &pleg.PodLifecycleEvent{ID:"285db220-ff4f-11e8-a630-005056aeab8e", Type:"ContainerDied", Data:"aca17a91b4668098ff70eeb6d519ff4aa55946bbccabd30b1eea8a0cbdc4845e"}
2018-12-14 03:20:13.433 [INFO][93845] utils.go 379: Configured environment: [CNI_COMMAND=DEL CNI_CONTAINERID=1388ad2ccbc22d94257e9d4b8f85f0eb3c168b1121243546a4404550b66ed092 CNI_NETNS=/proc/93228/ns/net CNI_ARGS=IgnoreUnknown=1;IgnoreUnknown=1;K8S_POD_NAMESPACE=mynamespace;K8S_POD_NAME=my-cron-1544757600-zcpgm;K8S_POD_INFRA_CONTAINER_ID=1388ad2ccbc22d94257e9d4b8f85f0eb3c168b1121243546a4404550b66ed092 CNI_IFNAME=eth0 CNI_PATH=/opt/cni/bin LANG=en_US LANGUAGE=en_US: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HYPERKUBE_IMAGE=myregistry:5000/hyperkube:v1.11.3-0 PAUSE_IMAGE=myregistry:5000/pause:3.1-0 API_ENDPOINT=10.130.16.16 CADVISOR_PORT=4194 KUBE_API_SECURE_PORT=6443 KUBE_CONFIG=kubeconfig KUBELET_PORT=10250 LOGLEVEL=2 POD_MANIFEST_DIR=/etc/kubernetes/manifests SELF_HOSTNAME=knode7 SELF_IP=10.130.16.154 NODETYPE=kube_minion ETCD_ENDPOINTS=http://etcd1:4001,http://etcd2:4001,http://etcd3:4001,http://etcd4:4001,http://etcd5:4001 KUBECONFIG=kubeconfig]
2018-12-14 03:20:13.502 [INFO][93861] utils.go 379: Configured environment: [CNI_COMMAND=DEL CNI_CONTAINERID=1388ad2ccbc22d94257e9d4b8f85f0eb3c168b1121243546a4404550b66ed092 CNI_NETNS=/proc/93228/ns/net CNI_ARGS=IgnoreUnknown=1;IgnoreUnknown=1;K8S_POD_NAMESPACE=mynamespace;K8S_POD_NAME=my-cron-1544757600-zcpgm;K8S_POD_INFRA_CONTAINER_ID=1388ad2ccbc22d94257e9d4b8f85f0eb3c168b1121243546a4404550b66ed092 CNI_IFNAME=eth0 CNI_PATH=/opt/cni/bin LANG=en_US LANGUAGE=en_US: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HYPERKUBE_IMAGE=myregistry:5000/hyperkube:v1.11.3-0 PAUSE_IMAGE=myregistry:5000/pause:3.1-0 API_ENDPOINT=10.130.16.16 CADVISOR_PORT=4194 KUBE_API_SECURE_PORT=6443 KUBE_CONFIG=kubeconfig KUBELET_PORT=10250 LOGLEVEL=2 POD_MANIFEST_DIR=/etc/kubernetes/manifests SELF_HOSTNAME=knode7 SELF_IP=10.130.16.154 NODETYPE=kube_minion ETCD_ENDPOINTS=http://etcd1:4001,http://etcd2:4001,http://etcd3:4001,http://etcd4:4001,http://etcd5:4001 KUBECONFIG=kubeconfig]
2018-12-14 03:20:13.512 [INFO][93861] ipam.go 1002: Releasing all IPs with handle 'mynamespace.my-cron-1544757600-zcpgm'
I1214 03:20:13.525058    3154 reconciler.go:181] operationExecutor.UnmountVolume started for volume "configmap-mount" (UniqueName: "kubernetes.io/configmap/285db220-ff4f-11e8-a630-005056aeab8e-configmap-mount") pod "285db220-ff4f-11e8-a630-005056aeab8e" (UID: "285db220-ff4f-11e8-a630-005056aeab8e")
I1214 03:20:13.525093    3154 reconciler.go:181] operationExecutor.UnmountVolume started for volume "etc-ssh" (UniqueName: "kubernetes.io/host-path/285db220-ff4f-11e8-a630-005056aeab8e-etc-ssh") pod "285db220-ff4f-11e8-a630-005056aeab8e" (UID: "285db220-ff4f-11e8-a630-005056aeab8e")
I1214 03:20:13.525157    3154 reconciler.go:181] operationExecutor.UnmountVolume started for volume "default-token-jim0p" (UniqueName: "kubernetes.io/secret/285db220-ff4f-11e8-a630-005056aeab8e-default-token-jim0p") pod "285db220-ff4f-11e8-a630-005056aeab8e" (UID: "285db220-ff4f-11e8-a630-005056aeab8e")
I1214 03:20:13.525151    3154 operation_generator.go:688] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/285db220-ff4f-11e8-a630-005056aeab8e-etc-ssh" (OuterVolumeSpecName: "etc-ssh") pod "285db220-ff4f-11e8-a630-005056aeab8e" (UID: "285db220-ff4f-11e8-a630-005056aeab8e"). InnerVolumeSpecName "etc-ssh". PluginName "kubernetes.io/host-path", VolumeGidValue ""
I1214 03:20:13.525242    3154 reconciler.go:301] Volume detached for volume "etc-ssh" (UniqueName: "kubernetes.io/host-path/285db220-ff4f-11e8-a630-005056aeab8e-etc-ssh") on node "knode7" DevicePath ""
I1214 03:20:13.525545    3154 operation_generator.go:688] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/285db220-ff4f-11e8-a630-005056aeab8e-configmap-mount" (OuterVolumeSpecName: "configmap-mount") pod "285db220-ff4f-11e8-a630-005056aeab8e" (UID: "285db220-ff4f-11e8-a630-005056aeab8e"). InnerVolumeSpecName "configmap-mount". PluginName "kubernetes.io/configmap", VolumeGidValue ""
I1214 03:20:13.542250    3154 operation_generator.go:688] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/285db220-ff4f-11e8-a630-005056aeab8e-default-token-jim0p" (OuterVolumeSpecName: "default-token-jim0p") pod "285db220-ff4f-11e8-a630-005056aeab8e" (UID: "285db220-ff4f-11e8-a630-005056aeab8e"). InnerVolumeSpecName "default-token-jim0p". PluginName "kubernetes.io/secret", VolumeGidValue ""
2018-12-14T03:20:13.5582+00:00 knode7 [err] [my-cron] [POD] [my-cron-1544757600-zcpgm] Shutting down, got signal: Terminated
I1214 03:20:13.625484    3154 reconciler.go:301] Volume detached for volume "default-token-jim0p" (UniqueName: "kubernetes.io/secret/285db220-ff4f-11e8-a630-005056aeab8e-default-token-jim0p") on node "knode7" DevicePath ""
I1214 03:20:13.625519    3154 reconciler.go:301] Volume detached for volume "configmap-mount" (UniqueName: "kubernetes.io/configmap/285db220-ff4f-11e8-a630-005056aeab8e-configmap-mount") on node "knode7" DevicePath ""
I1214 03:20:14.386057    3154 kubelet.go:1882] SyncLoop (PLEG): "my-cron-1544757600-zcpgm_mynamespace(285db220-ff4f-11e8-a630-005056aeab8e)", event: &pleg.PodLifecycleEvent{ID:"285db220-ff4f-11e8-a630-005056aeab8e", Type:"ContainerDied", Data:"1388ad2ccbc22d94257e9d4b8f85f0eb3c168b1121243546a4404550b66ed092"}
I1214 03:20:14.386185    3154 kuberuntime_manager.go:403] No ready sandbox for pod "my-cron-1544757600-zcpgm_mynamespace(285db220-ff4f-11e8-a630-005056aeab8e)" can be found. Need to start a new one
2018-12-14 03:20:14.804 [INFO][94065] calico.go 75: Extracted identifiers EndpointIDs=&utils.WEPIdentifiers{Namespace:"mynamespace", WEPName:"", WorkloadEndpointIdentifiers:names.WorkloadEndpointIdentifiers{Node:"knode7", Orchestrator:"k8s", Endpoint:"eth0", Workload:"", Pod:"my-cron-1544757600-zcpgm", ContainerID:"685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243"}}
2018-12-14 03:20:14.804 [INFO][94065] utils.go 379: Configured environment: [CNI_COMMAND=ADD CNI_CONTAINERID=685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243 CNI_NETNS=/proc/93986/ns/net CNI_ARGS=IgnoreUnknown=1;IgnoreUnknown=1;K8S_POD_NAMESPACE=mynamespace;K8S_POD_NAME=my-cron-1544757600-zcpgm;K8S_POD_INFRA_CONTAINER_ID=685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243 CNI_IFNAME=eth0 CNI_PATH=/opt/cni/bin LANG=en_US LANGUAGE=en_US: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HYPERKUBE_IMAGE=myregistry:5000/hyperkube:v1.11.3-0 PAUSE_IMAGE=myregistry:5000/pause:3.1-0 API_ENDPOINT=10.130.16.16 CADVISOR_PORT=4194 KUBE_API_SECURE_PORT=6443 KUBE_CONFIG=kubeconfig KUBELET_PORT=10250 LOGLEVEL=2 POD_MANIFEST_DIR=/etc/kubernetes/manifests SELF_HOSTNAME=knode7 SELF_IP=10.130.16.154 NODETYPE=kube_minion ETCD_ENDPOINTS=http://etcd1:4001,http://etcd2:4001,http://etcd3:4001,http://etcd4:4001,http://etcd5:4001 KUBECONFIG=kubeconfig]
2018-12-14 03:20:14.808 [INFO][94065] k8s.go 60: Extracted identifiers for CmdAddK8s ContainerID="685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" Namespace="mynamespace" Pod="my-cron-1544757600-zcpgm" WorkloadEndpoint="knode7-k8s-my--cron--1544757600--zcpgm-eth0"
I1214 03:20:14.814591    3154 kubelet_pods.go:1082] Killing unwanted pod "my-cron-1544757600-zcpgm"
E1214 03:20:14.818791    3154 kuberuntime_container.go:65] Can't make a ref to pod "my-cron-1544757600-zcpgm_mynamespace(285db220-ff4f-11e8-a630-005056aeab8e)", container my-cron: selfLink was empty, can't make reference
2018-12-14 03:20:14.877 [INFO][94081] utils.go 379: Configured environment: [CNI_COMMAND=ADD CNI_CONTAINERID=685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243 CNI_NETNS=/proc/93986/ns/net CNI_ARGS=IgnoreUnknown=1;IgnoreUnknown=1;K8S_POD_NAMESPACE=mynamespace;K8S_POD_NAME=my-cron-1544757600-zcpgm;K8S_POD_INFRA_CONTAINER_ID=685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243 CNI_IFNAME=eth0 CNI_PATH=/opt/cni/bin LANG=en_US LANGUAGE=en_US: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HYPERKUBE_IMAGE=myregistry:5000/hyperkube:v1.11.3-0 PAUSE_IMAGE=myregistry:5000/pause:3.1-0 API_ENDPOINT=10.130.16.16 CADVISOR_PORT=4194 KUBE_API_SECURE_PORT=6443 KUBE_CONFIG=kubeconfig KUBELET_PORT=10250 LOGLEVEL=2 POD_MANIFEST_DIR=/etc/kubernetes/manifests SELF_HOSTNAME=knode7 SELF_IP=10.130.16.154 NODETYPE=kube_minion ETCD_ENDPOINTS=http://etcd1:4001,http://etcd2:4001,http://etcd3:4001,http://etcd4:4001,http://etcd5:4001 KUBECONFIG=kubeconfig]
Calico CNI IPAM handle=calico-k8s-network.685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243
2018-12-14 03:20:14.880 [INFO][94081] calico-ipam.go 186: Auto assigning IP ContainerID="685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" HandleID="calico-k8s-network.685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" Workload="knode7-k8s-my--cron--1544757600--zcpgm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc42000e6d0), Attrs:map[string]string(nil), Hostname:"knode7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}}
2018-12-14 03:20:14.882 [INFO][94081] ipam.go 250: Looking up existing affinities for host handle="calico-k8s-network.685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" host="knode7"
2018-12-14 03:20:14.882 [INFO][94081] ipam.go 271: Trying affinity for 10.130.132.192/26 handle="calico-k8s-network.685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" host="knode7"
2018-12-14 03:20:14.885 [INFO][94081] ipam.go 668: Attempting to assign 1 addresses from block block=10.130.132.192/26 handle="calico-k8s-network.685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" host="knode7"
2018-12-14 03:20:14.885 [INFO][94081] ipam.go 1093: Creating new handle: calico-k8s-network.685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243
2018-12-14 03:20:14.895 [INFO][94081] ipam.go 691: Writing block in order to claim IPs block=10.130.132.192/26 handle="calico-k8s-network.685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" host="knode7"
2018-12-14 03:20:14.897 [INFO][94081] ipam.go 701: Successfully claimed IPs: [10.130.132.248] block=10.130.132.192/26 handle="calico-k8s-network.685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" host="knode7"
2018-12-14 03:20:14.897 [INFO][94081] ipam.go 303: Block '10.130.132.192/26' provided addresses: [10.130.132.248] handle="calico-k8s-network.685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" host="knode7"
2018-12-14 03:20:14.898 [INFO][94081] ipam.go 458: Auto-assigned 1 out of 1 IPv4s: [10.130.132.248] handle="calico-k8s-network.685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" host="knode7"
2018-12-14 03:20:14.898 [INFO][94081] calico-ipam.go 214: IPAM Result ContainerID="685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" HandleID="calico-k8s-network.685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" Workload="knode7-k8s-my--cron--1544757600--zcpgm-eth0" result.IPs=[]*current.IPConfig{(*current.IPConfig)(0xc4201cc960)}
2018-12-14 03:20:14.899 [INFO][94065] k8s.go 267: Populated endpoint ContainerID="685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" Namespace="mynamespace" Pod="my-cron-1544757600-zcpgm" WorkloadEndpoint="knode7-k8s-my--cron--1544757600--zcpgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"knode7-k8s-my--cron--1544757600--zcpgm-eth0", GenerateName:"", Namespace:"mynamespace", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"knode7", ContainerID:"", Pod:"my-cron-1544757600-zcpgm", Endpoint:"eth0", IPNetworks:[]string{"10.130.132.248/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"calico-k8s-network"}, InterfaceName:"", MAC:"", Ports:[]v3.EndpointPort(nil)}}
2018-12-14 03:20:14.899 [INFO][94065] network.go 31: Setting the host side veth name to califaf83cce795 ContainerID="685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" Namespace="mynamespace" Pod="my-cron-1544757600-zcpgm" WorkloadEndpoint="knode7-k8s-my--cron--1544757600--zcpgm-eth0"
2018-12-14 03:20:14.899 [ERROR][94065] network.go 186: Error creating veth: failed to Statfs "/proc/93986/ns/net": no such file or directory ContainerID="685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" Namespace="mynamespace" Pod="my-cron-1544757600-zcpgm" WorkloadEndpoint="knode7-k8s-my--cron--1544757600--zcpgm-eth0"
2018-12-14 03:20:14.899 [ERROR][94065] k8s.go 280: Error setting up networking ContainerID="685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" Namespace="mynamespace" Pod="my-cron-1544757600-zcpgm" WorkloadEndpoint="knode7-k8s-my--cron--1544757600--zcpgm-eth0" error=failed to Statfs "/proc/93986/ns/net": no such file or directory
2018-12-14 03:20:14.899 [INFO][94065] k8s.go 272: Releasing IPAM allocation(s) after failure ContainerID="685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" Namespace="mynamespace" Pod="my-cron-1544757600-zcpgm" WorkloadEndpoint="knode7-k8s-my--cron--1544757600--zcpgm-eth0" endpointIPs=[]string{"10.130.132.248/32"}
2018-12-14 03:20:14.899 [INFO][94065] utils.go 398: Cleaning up IP allocations for failed ADD ContainerID="685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" Namespace="mynamespace" Pod="my-cron-1544757600-zcpgm" WorkloadEndpoint="knode7-k8s-my--cron--1544757600--zcpgm-eth0"
2018-12-14 03:20:14.974 [INFO][94113] utils.go 379: Configured environment: [CNI_COMMAND=DEL CNI_CONTAINERID=685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243 CNI_NETNS=/proc/93986/ns/net CNI_ARGS=IgnoreUnknown=1;IgnoreUnknown=1;K8S_POD_NAMESPACE=mynamespace;K8S_POD_NAME=my-cron-1544757600-zcpgm;K8S_POD_INFRA_CONTAINER_ID=685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243 CNI_IFNAME=eth0 CNI_PATH=/opt/cni/bin LANG=en_US LANGUAGE=en_US: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HYPERKUBE_IMAGE=myregistry:5000/hyperkube:v1.11.3-0 PAUSE_IMAGE=myregistry:5000/pause:3.1-0 API_ENDPOINT=10.130.16.16 CADVISOR_PORT=4194 KUBE_API_SECURE_PORT=6443 KUBE_CONFIG=kubeconfig KUBELET_PORT=10250 LOGLEVEL=2 POD_MANIFEST_DIR=/etc/kubernetes/manifests SELF_HOSTNAME=knode7 SELF_IP=10.130.16.154 NODETYPE=kube_minion ETCD_ENDPOINTS=http://etcd1:4001,http://etcd2:4001,http://etcd3:4001,http://etcd4:4001,http://etcd5:4001 KUBECONFIG=kubeconfig]
2018-12-14 03:20:14.977 [INFO][94113] calico-ipam.go 258: Releasing address using handleID ContainerID="685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" HandleID="calico-k8s-network.685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" Workload="knode7-k8s-my--cron--1544757600--zcpgm-eth0"
2018-12-14 03:20:14.977 [INFO][94113] ipam.go 1002: Releasing all IPs with handle 'calico-k8s-network.685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243'
2018-12-14 03:20:14.978 [INFO][94113] ipam.go 1021: Querying block so we can release IPs by handle cidr=10.130.132.192/26 handle="calico-k8s-network.685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243"
2018-12-14 03:20:14.979 [INFO][94113] ipam.go 1041: Block has 1 IPs with the given handle cidr=10.130.132.192/26 handle="calico-k8s-network.685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243"
2018-12-14 03:20:14.979 [INFO][94113] ipam.go 1063: Updating block to release IPs cidr=10.130.132.192/26 handle="calico-k8s-network.685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243"
2018-12-14 03:20:14.981 [INFO][94113] ipam.go 1076: Successfully released IPs from block cidr=10.130.132.192/26 handle="calico-k8s-network.685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243"
2018-12-14 03:20:14.984 [INFO][94113] calico-ipam.go 267: Released address using handleID ContainerID="685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" HandleID="calico-k8s-network.685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" Workload="knode7-k8s-my--cron--1544757600--zcpgm-eth0"
2018-12-14 03:20:14.984 [INFO][94113] calico-ipam.go 276: Releasing address using workloadID ContainerID="685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" HandleID="calico-k8s-network.685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" Workload="knode7-k8s-my--cron--1544757600--zcpgm-eth0"
2018-12-14 03:20:14.984 [INFO][94113] ipam.go 1002: Releasing all IPs with handle 'mynamespace.my-cron-1544757600-zcpgm'
E1214 03:20:14.987393    3154 cni.go:260] Error adding network: failed to Statfs "/proc/93986/ns/net": no such file or directory
E1214 03:20:14.987412    3154 cni.go:228] Error while adding to cni network: failed to Statfs "/proc/93986/ns/net": no such file or directory
W1214 03:20:15.099367    3154 cni.go:243] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243"
2018-12-14 03:20:15.212 [INFO][94188] utils.go 379: Configured environment: [CNI_COMMAND=DEL CNI_CONTAINERID=685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243 CNI_NETNS= CNI_ARGS=IgnoreUnknown=1;IgnoreUnknown=1;K8S_POD_NAMESPACE=mynamespace;K8S_POD_NAME=my-cron-1544757600-zcpgm;K8S_POD_INFRA_CONTAINER_ID=685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243 CNI_IFNAME=eth0 CNI_PATH=/opt/cni/bin LANG=en_US LANGUAGE=en_US: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HYPERKUBE_IMAGE=myregistry:5000/hyperkube:v1.11.3-0 PAUSE_IMAGE=myregistry:5000/pause:3.1-0 API_ENDPOINT=10.130.16.16 CADVISOR_PORT=4194 KUBE_API_SECURE_PORT=6443 KUBE_CONFIG=kubeconfig KUBELET_PORT=10250 LOGLEVEL=2 POD_MANIFEST_DIR=/etc/kubernetes/manifests SELF_HOSTNAME=knode7 SELF_IP=10.130.16.154 NODETYPE=kube_minion ETCD_ENDPOINTS=http://etcd1:4001,http://etcd2:4001,http://etcd3:4001,http://etcd4:4001,http://etcd5:4001 KUBECONFIG=kubeconfig]
2018-12-14 03:20:15.215 [INFO][94188] calico.go 431: Extracted identifiers ContainerID="685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" Node="knode7" Orchestrator="k8s" WorkloadEndpoint="knode7-k8s-my--cron--1544757600--zcpgm-eth0"
2018-12-14 03:20:15.216 [WARNING][94188] k8s.go 324: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" WorkloadEndpoint="knode7-k8s-my--cron--1544757600--zcpgm-eth0"
2018-12-14 03:20:15.216 [INFO][94188] k8s.go 354: Releasing IP address(es) ContainerID="685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243"
2018-12-14 03:20:15.294 [INFO][94213] utils.go 379: Configured environment: [CNI_COMMAND=DEL CNI_CONTAINERID=685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243 CNI_NETNS= CNI_ARGS=IgnoreUnknown=1;IgnoreUnknown=1;K8S_POD_NAMESPACE=mynamespace;K8S_POD_NAME=my-cron-1544757600-zcpgm;K8S_POD_INFRA_CONTAINER_ID=685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243 CNI_IFNAME=eth0 CNI_PATH=/opt/cni/bin LANG=en_US LANGUAGE=en_US: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HYPERKUBE_IMAGE=myregistry:5000/hyperkube:v1.11.3-0 PAUSE_IMAGE=myregistry:5000/pause:3.1-0 API_ENDPOINT=10.130.16.16 CADVISOR_PORT=4194 KUBE_API_SECURE_PORT=6443 KUBE_CONFIG=kubeconfig KUBELET_PORT=10250 LOGLEVEL=2 POD_MANIFEST_DIR=/etc/kubernetes/manifests SELF_HOSTNAME=knode7 SELF_IP=10.130.16.154 NODETYPE=kube_minion ETCD_ENDPOINTS=http://etcd1:4001,http://etcd2:4001,http://etcd3:4001,http://etcd4:4001,http://etcd5:4001 KUBECONFIG=kubeconfig]
2018-12-14 03:20:15.296 [INFO][94213] calico-ipam.go 258: Releasing address using handleID ContainerID="685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" HandleID="calico-k8s-network.685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" Workload="knode7-k8s-my--cron--1544757600--zcpgm-eth0"
2018-12-14 03:20:15.296 [INFO][94213] ipam.go 1002: Releasing all IPs with handle 'calico-k8s-network.685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243'
2018-12-14 03:20:15.298 [WARNING][94213] calico-ipam.go 265: Asked to release address but it doesn't exist. Ignoring ContainerID="685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" HandleID="calico-k8s-network.685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" Workload="knode7-k8s-my--cron--1544757600--zcpgm-eth0"
2018-12-14 03:20:15.298 [INFO][94213] calico-ipam.go 276: Releasing address using workloadID ContainerID="685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" HandleID="calico-k8s-network.685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" Workload="knode7-k8s-my--cron--1544757600--zcpgm-eth0"
2018-12-14 03:20:15.298 [INFO][94213] ipam.go 1002: Releasing all IPs with handle 'mynamespace.my-cron-1544757600-zcpgm'
2018-12-14 03:20:15.300 [INFO][94188] k8s.go 358: Cleaning up netns ContainerID="685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243"
2018-12-14 03:20:15.300 [INFO][94188] k8s.go 370: Teardown processing complete. ContainerID="685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243"
E1214 03:20:15.303227    3154 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to set up sandbox container "685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" network for pod "my-cron-1544757600-zcpgm": NetworkPlugin cni failed to set up pod "my-cron-1544757600-zcpgm_mynamespace" network: failed to Statfs "/proc/93986/ns/net": no such file or directory
E1214 03:20:15.303263    3154 kuberuntime_sandbox.go:56] CreatePodSandbox for pod "my-cron-1544757600-zcpgm_mynamespace(285db220-ff4f-11e8-a630-005056aeab8e)" failed: rpc error: code = Unknown desc = failed to set up sandbox container "685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" network for pod "my-cron-1544757600-zcpgm": NetworkPlugin cni failed to set up pod "my-cron-1544757600-zcpgm_mynamespace" network: failed to Statfs "/proc/93986/ns/net": no such file or directory
E1214 03:20:15.303294    3154 kuberuntime_manager.go:646] createPodSandbox for pod "my-cron-1544757600-zcpgm_mynamespace(285db220-ff4f-11e8-a630-005056aeab8e)" failed: rpc error: code = Unknown desc = failed to set up sandbox container "685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" network for pod "my-cron-1544757600-zcpgm": NetworkPlugin cni failed to set up pod "my-cron-1544757600-zcpgm_mynamespace" network: failed to Statfs "/proc/93986/ns/net": no such file or directory
E1214 03:20:15.303463    3154 pod_workers.go:186] Error syncing pod 285db220-ff4f-11e8-a630-005056aeab8e ("my-cron-1544757600-zcpgm_mynamespace(285db220-ff4f-11e8-a630-005056aeab8e)"), skipping: failed to "CreatePodSandbox" for "my-cron-1544757600-zcpgm_mynamespace(285db220-ff4f-11e8-a630-005056aeab8e)" with CreatePodSandboxError: "CreatePodSandbox for pod \"my-cron-1544757600-zcpgm_mynamespace(285db220-ff4f-11e8-a630-005056aeab8e)\" failed: rpc error: code = Unknown desc = failed to set up sandbox container \"685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243\" network for pod \"my-cron-1544757600-zcpgm\": NetworkPlugin cni failed to set up pod \"my-cron-1544757600-zcpgm_mynamespace\" network: failed to Statfs \"/proc/93986/ns/net\": no such file or directory"
W1214 03:20:15.403116    3154 docker_sandbox.go:372] failed to read pod IP from plugin/docker: NetworkPlugin cni failed on the status hook for pod "my-cron-1544757600-zcpgm_mynamespace": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243"
I1214 03:20:15.408285    3154 kubelet.go:1882] SyncLoop (PLEG): "my-cron-1544757600-zcpgm_mynamespace(285db220-ff4f-11e8-a630-005056aeab8e)", event: &pleg.PodLifecycleEvent{ID:"285db220-ff4f-11e8-a630-005056aeab8e", Type:"ContainerDied", Data:"685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243"}
W1214 03:20:15.408350    3154 pod_container_deletor.go:75] Container "685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" not found in pod's containers
I1214 03:30:10.775267    3154 kubelet.go:1853] SyncLoop (DELETE, "api"): "my-cron-1544757600-zcpgm_mynamespace(285db220-ff4f-11e8-a630-005056aeab8e)"
I1214 03:30:10.777718    3154 kubelet.go:1847] SyncLoop (REMOVE, "api"): "my-cron-1544757600-zcpgm_mynamespace(285db220-ff4f-11e8-a630-005056aeab8e)"
I1214 03:30:10.777760    3154 kubelet.go:2041] Failed to delete pod "my-cron-1544757600-zcpgm_mynamespace(285db220-ff4f-11e8-a630-005056aeab8e)", err: pod not found
W1214 03:30:44.984466    3154 cni.go:243] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243"
2018-12-14 03:30:45.060 [INFO][5696] utils.go 379: Configured environment: [CNI_COMMAND=DEL CNI_CONTAINERID=685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243 CNI_NETNS= CNI_ARGS=IgnoreUnknown=1;IgnoreUnknown=1;K8S_POD_NAMESPACE=mynamespace;K8S_POD_NAME=my-cron-1544757600-zcpgm;K8S_POD_INFRA_CONTAINER_ID=685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243 CNI_IFNAME=eth0 CNI_PATH=/opt/cni/bin LANG=en_US LANGUAGE=en_US: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HYPERKUBE_IMAGE=myregistry:5000/hyperkube:v1.11.3-0 PAUSE_IMAGE=myregistry:5000/pause:3.1-0 API_ENDPOINT=10.130.16.16 CADVISOR_PORT=4194 KUBE_API_SECURE_PORT=6443 KUBE_CONFIG=kubeconfig KUBELET_PORT=10250 LOGLEVEL=2 POD_MANIFEST_DIR=/etc/kubernetes/manifests SELF_HOSTNAME=knode7 SELF_IP=10.130.16.154 NODETYPE=kube_minion ETCD_ENDPOINTS=http://etcd1:4001,http://etcd2:4001,http://etcd3:4001,http://etcd4:4001,http://etcd5:4001 KUBECONFIG=kubeconfig]
2018-12-14 03:30:45.063 [INFO][5696] calico.go 431: Extracted identifiers ContainerID="685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" Node="knode7" Orchestrator="k8s" WorkloadEndpoint="knode7-k8s-my--cron--1544757600--zcpgm-eth0"
2018-12-14 03:30:45.064 [WARNING][5696] k8s.go 324: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" WorkloadEndpoint="knode7-k8s-my--cron--1544757600--zcpgm-eth0"
2018-12-14 03:30:45.064 [INFO][5696] k8s.go 354: Releasing IP address(es) ContainerID="685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243"
2018-12-14 03:30:45.130 [INFO][5712] utils.go 379: Configured environment: [CNI_COMMAND=DEL CNI_CONTAINERID=685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243 CNI_NETNS= CNI_ARGS=IgnoreUnknown=1;IgnoreUnknown=1;K8S_POD_NAMESPACE=mynamespace;K8S_POD_NAME=my-cron-1544757600-zcpgm;K8S_POD_INFRA_CONTAINER_ID=685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243 CNI_IFNAME=eth0 CNI_PATH=/opt/cni/bin LANG=en_US LANGUAGE=en_US: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HYPERKUBE_IMAGE=myregistry:5000/hyperkube:v1.11.3-0 PAUSE_IMAGE=myregistry:5000/pause:3.1-0 API_ENDPOINT=10.130.16.16 CADVISOR_PORT=4194 KUBE_API_SECURE_PORT=6443 KUBE_CONFIG=kubeconfig KUBELET_PORT=10250 LOGLEVEL=2 POD_MANIFEST_DIR=/etc/kubernetes/manifests SELF_HOSTNAME=knode7 SELF_IP=10.130.16.154 NODETYPE=kube_minion ETCD_ENDPOINTS=http://etcd1:4001,http://etcd2:4001,http://etcd3:4001,http://etcd4:4001,http://etcd5:4001 KUBECONFIG=kubeconfig]
2018-12-14 03:30:45.131 [INFO][5712] calico-ipam.go 258: Releasing address using handleID ContainerID="685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" HandleID="calico-k8s-network.685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" Workload="knode7-k8s-my--cron--1544757600--zcpgm-eth0"
2018-12-14 03:30:45.131 [INFO][5712] ipam.go 1002: Releasing all IPs with handle 'calico-k8s-network.685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243'
2018-12-14 03:30:45.133 [WARNING][5712] calico-ipam.go 265: Asked to release address but it doesn't exist. Ignoring ContainerID="685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" HandleID="calico-k8s-network.685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" Workload="knode7-k8s-my--cron--1544757600--zcpgm-eth0"
2018-12-14 03:30:45.133 [INFO][5712] calico-ipam.go 276: Releasing address using workloadID ContainerID="685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" HandleID="calico-k8s-network.685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" Workload="knode7-k8s-my--cron--1544757600--zcpgm-eth0"
2018-12-14 03:30:45.133 [INFO][5712] ipam.go 1002: Releasing all IPs with handle 'mynamespace.my-cron-1544757600-zcpgm'
2018-12-14 03:30:45.135 [INFO][5696] k8s.go 358: Cleaning up netns ContainerID="685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243"
2018-12-14 03:30:45.135 [INFO][5696] k8s.go 370: Teardown processing complete. ContainerID="685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243"
W1214 03:30:45.137972    3154 docker_sandbox.go:372] failed to read pod IP from plugin/docker: NetworkPlugin cni failed on the status hook for pod "my-cron-1544757600-zcpgm_mynamespace": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243"

Controller manager logs for the pod:

I1214 03:20:07.957146       1 event.go:221] Event(v1.ObjectReference{Kind:"Job", Namespace:"mynamespace", Name:"my-cron-1544757600", UID:"285d173d-ff4f-11e8-a630-005056aeab8e", APIVersion:"batch/v1", ResourceVersion:"259923428", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-cron-1544757600-zcpgm
I1214 03:30:10.767930       1 cronjob_controller.go:381] CronJob controller is deleting Pod mynamespace/my-cron-1544757600-zcpgm

Cron yaml:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  annotations:
    kubernetes.io/change-cause: kubectl create --kubeconfig=kubeconfig --namespace=mynamespace --filename=mytemplate.yaml --record=true
  creationTimestamp: 2018-11-28T22:06:58Z
  labels:
    name: my-cron
    version: 20181128-220646
  name: my-cron
  namespace: mynamespace
  resourceVersion: "259972764"
  selfLink: /apis/batch/v1beta1/namespaces/mynamespace/cronjobs/my-cron
  uid: ecd21f6e-f359-11e8-a2a0-005056ae8c5d
spec:
  concurrencyPolicy: Forbid
  failedJobsHistoryLimit: 5
  jobTemplate:
    metadata:
      creationTimestamp: null
    spec:
      template:
        metadata:
          annotations:
            seccomp.security.alpha.kubernetes.io/pod: localhost/seccomp.json
          creationTimestamp: null
          labels:
            name: my-cron
            version: 20181128-220646
        spec:
          containers:
          - command:
            - task.py
            image: myregistry:5000/mycronimage:tag
            imagePullPolicy: IfNotPresent
            livenessProbe:
              exec:
                command:
                - /control-scripts/livenessCheck.sh
              failureThreshold: 3
              initialDelaySeconds: 30
              periodSeconds: 30
              successThreshold: 1
              timeoutSeconds: 1
            name: my-cron
            resources:
              limits:
                cpu: 1500m
                memory: 400Mi
              requests:
                cpu: 500m
                memory: 250Mi
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: File
            volumeMounts:
            - mountPath: /tmp/ssh
              name: etc-ssh
            - mountPath: /configs
              name: configmap-mount
          dnsPolicy: ClusterFirst
          nodeSelector:
            nodetype: kube_minion
          restartPolicy: OnFailure
          schedulerName: default-scheduler
          securityContext: {}
          terminationGracePeriodSeconds: 30
          volumes:
          - hostPath:
              path: /etc/ssh
              type: ""
            name: etc-ssh
          - configMap:
              defaultMode: 420
              items: ...
              name: 1f1e575e87d5988210397febc773c716
            name: configmap-mount
  schedule: '*/5 * * * *'
  successfulJobsHistoryLimit: 2
  suspend: false
status:
  lastScheduleTime: 2018-12-14T07:05:00Z

Environment:

  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T18:02:47Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T17:53:03Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration: Bare metal
  • OS (e.g. from /etc/os-release): 16.04.1 LTS (Xenial Xerus)
  • Kernel (e.g. uname -a): 4.4.0-116-generic #140-Ubuntu
  • Install tools: N/A
  • Others: N/A

/kind bug

@k8s-ci-robot added the kind/bug and needs-sig labels on Dec 14, 2018
@DreadPirateShawn
Author

Note: This is structurally very similar to #57465. However, that issue occurred during pod teardown and has since been fixed. The similarity (from my perspective) is that both that issue and this one contain the same core "network: failed to Statfs" error, simply at opposite ends of the pod lifecycle.

@DreadPirateShawn
Author

/sig network

@k8s-ci-robot added the sig/network label and removed the needs-sig label on Dec 14, 2018
@thockin
Member

thockin commented Dec 17, 2018

What CNI driver are you using?

@dcbw because /.*CNI.*/

@DreadPirateShawn
Author

@blakebarnett

We're seeing this frequently with CronJobs pods also.

@thockin added the triage/unresolved label on Mar 8, 2019
@clcollins

We're seeing this on OpenShift Origin/OKD with the ovs-multitenant plugin. We've only seen it with CronJob pods, as far as I can tell from our logs.

@athenabot

@dcbw
If this issue has been triaged, please comment /remove-triage unresolved.

Meta:
/athenabot mark-triage-reminder

🤖 I am a bot run by @vllry. 👩‍🔬

@caarlos0
Contributor

caarlos0 commented May 9, 2019

Just had that issue with a Job on GKE 1.12.7-gke.7 too.

@athenabot

@caseydavenport @dcbw
If this issue has been triaged, please comment /remove-triage unresolved.

If you aren't able to handle this issue, consider unassigning yourself and/or adding the help-wanted label.

🤖 I am a bot run by vllry. 👩‍🔬


@caseydavenport
Member

I1214 03:20:14.386185    3154 kuberuntime_manager.go:403] No ready sandbox for pod "my-cron-1544757600-zcpgm_mynamespace(285db220-ff4f-11e8-a630-005056aeab8e)" can be found. Need to start a new one
2018-12-14 03:20:14.804 [INFO][94065] calico.go 75: Extracted identifiers EndpointIDs=&utils.WEPIdentifiers{Namespace:"mynamespace", WEPName:"", WorkloadEndpointIdentifiers:names.WorkloadEndpointIdentifiers{Node:"knode7", Orchestrator:"k8s", Endpoint:"eth0", Workload:"", Pod:"my-cron-1544757600-zcpgm", ContainerID:"685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243"}}
2018-12-14 03:20:14.804 [INFO][94065] utils.go 379: Configured environment: [CNI_COMMAND=ADD CNI_CONTAINERID=685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243 CNI_NETNS=/proc/93986/ns/net CNI_ARGS=IgnoreUnknown=1;IgnoreUnknown=1;K8S_POD_NAMESPACE=mynamespace;K8S_POD_NAME=my-cron-1544757600-zcpgm;K8S_POD_INFRA_CONTAINER_ID=685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243 CNI_IFNAME=eth0 CNI_PATH=/opt/cni/bin LANG=en_US LANGUAGE=en_US: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HYPERKUBE_IMAGE=myregistry:5000/hyperkube:v1.11.3-0 PAUSE_IMAGE=myregistry:5000/pause:3.1-0 API_ENDPOINT=10.130.16.16 CADVISOR_PORT=4194 KUBE_API_SECURE_PORT=6443 KUBE_CONFIG=kubeconfig KUBELET_PORT=10250 LOGLEVEL=2 POD_MANIFEST_DIR=/etc/kubernetes/manifests SELF_HOSTNAME=knode7 SELF_IP=10.130.16.154 NODETYPE=kube_minion ETCD_ENDPOINTS=http://etcd1:4001,http://etcd2:4001,http://etcd3:4001,http://etcd4:4001,http://etcd5:4001 KUBECONFIG=kubeconfig]
2018-12-14 03:20:14.808 [INFO][94065] k8s.go 60: Extracted identifiers for CmdAddK8s ContainerID="685022f2f21c867e5c3acc0963758be7e1a1b086d9b2048ea3a3096fa0b74243" Namespace="mynamespace" Pod="my-cron-1544757600-zcpgm" WorkloadEndpoint="knode7-k8s-my--cron--1544757600--zcpgm-eth0"
I1214 03:20:14.814591    3154 kubelet_pods.go:1082] Killing unwanted pod "my-cron-1544757600-zcpgm"

This subset of logs is interesting - it seems to be doing the following:

  • Identifying that the sandbox is not ready, and deciding to create a new one (which will trigger CNI ADD)
  • Calling the CNI plugin (in this case, the logs are coming from Calico). You can see it is receiving the ADD call.
  • Deciding that the pod is unwanted, and killing it.

It feels to me that this is a race between k8s deciding to tear down the pod (thus deleting the netns) and the CNI ADD call initiated just before it - the CNI plugin, of course, expects the netns to exist.
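
If that reading is right, one way to make the failure benign would be for the ADD path to treat a vanished netns as "pod already gone" rather than an error. A minimal sketch under that assumption (the wrapper and error names are hypothetical, not kubelet's or Calico's actual API), reusing the same Statfs-based check the plugin performs:

package main

import (
	"errors"
	"fmt"

	"golang.org/x/sys/unix"
)

// errSandboxGone marks the race described above: the netns path vanished
// between the decision to run CNI ADD and the veth setup.
var errSandboxGone = errors.New("sandbox network namespace disappeared before ADD completed")

// addPodNetwork is a hypothetical wrapper around a CNI ADD call. It re-checks
// the netns path and maps ENOENT onto errSandboxGone so the caller can treat
// "pod was torn down while we were setting it up" as a non-error.
func addPodNetwork(nsPath string, add func(nsPath string) error) error {
	var st unix.Statfs_t
	if err := unix.Statfs(nsPath, &st); err != nil {
		if errors.Is(err, unix.ENOENT) {
			return errSandboxGone
		}
		return fmt.Errorf("failed to Statfs %q: %v", nsPath, err)
	}
	return add(nsPath)
}

func main() {
	// Simulated CNI ADD that never runs because the netns is already gone.
	err := addPodNetwork("/proc/93986/ns/net", func(nsPath string) error {
		fmt.Println("would create veth pair inside", nsPath)
		return nil
	})
	switch {
	case errors.Is(err, errSandboxGone):
		fmt.Println("pod already torn down; skip network setup instead of surfacing an error")
	case err != nil:
		fmt.Println("real failure:", err)
	}
}

Note this only narrows the window -- the netns can still disappear between the check and the veth creation -- so it illustrates the race rather than eliminating it.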

/kind bug
/remove-triage unresolved

@k8s-ci-robot removed the triage/unresolved label on Jun 24, 2019
@caseydavenport
Member

The fact that this seems to be happening exclusively with CronJob pods might be a clue - I wonder what is special about them.

For those who are seeing this - does this resolve itself? Or are the same pods hitting this scenario consistently?

Probably worth pulling in @kubernetes/sig-node-bugs as well.

/sig node

@k8s-ci-robot added the sig/node label on Jun 24, 2019
@slyoldfox

I upgraded to CNI 1.4.1 and EKS/kubernetes 1.13 and haven't seen this error anymore.

@caseydavenport
Member

@DreadPirateShawn @blakebarnett @clcollins if you get a chance to try this on 1.13+, that would be useful to tease out if this may have already been fixed in a newer version of Kubernetes.

@dcbw removed their assignment on Jul 11, 2019
@tcharette

Has anyone found a fix for this? We're experiencing this on our OpenShift 3.11 on-prem cluster with cron jobs.

@fredsted

fredsted commented Nov 5, 2019

Getting this on cronjobs w/ Kubernetes 1.12.10. It seems to happen randomly to some nodes, after they've been running for a few weeks, and we can fix it by either restarting the node or terminating it.

I've attached kubelet output from a typical example cronjob that failed.

failing-cronjob-network-error.txt

@inductor
Member

We've encountered this issue with CronJob on 1.15.7-gke.23

@willgleich

I've encountered this error with Argo Cron Workflows using k8s v1.17.4 and calico v3.13.1

@caseydavenport
Member

/unassign

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Aug 12, 2020
@novonilc

novonilc commented Aug 18, 2020

We are seeing this issue with OpenShift as well. Any potential fix?

k8s: 1.11

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Sep 17, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
