
NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized #285

Closed
pradeep-hegde opened this issue Feb 15, 2023 · 6 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@pradeep-hegde

Describe the bug
Pods are unable to reach the API server and fail to come up. Although Calico on the Windows nodes is up and running, pods fail to start because container networking is not ready.
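The failing node's kubelet-reported condition can be confirmed from the control plane. A minimal sketch, using the NotReady Windows node from the cluster listing in this report:

```shell
# Print the Ready condition's reason and message for the NotReady Windows node;
# this is where "NetworkPluginNotReady ... cni plugin not initialized" surfaces.
kubectl get node 172-50-175-42 \
  -o jsonpath='{range .status.conditions[?(@.type=="Ready")]}{.reason}: {.message}{"\n"}{end}'

# Or inspect all conditions and recent events in one shot:
kubectl describe node 172-50-175-42
```

Both commands require access to the cluster's kubeconfig; they are read-only and safe to run repeatedly.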

Cluster info:
Kubernetes was deployed with kubeadm, and the Windows nodes were joined following the sig-windows-tools guide.

NAME             STATUS     ROLES           AGE     VERSION   INTERNAL-IP      EXTERNAL-IP      OS-IMAGE                       KERNEL-VERSION      CONTAINER-RUNTIME
172-50-172-180   Ready      <none>          3d15h   v1.24.7   172.50.172.180   172.50.172.180   Windows Server 2019 Standard   10.0.17763.3532     containerd://1.6.8
172-50-173-194   Ready      <none>          3d15h   v1.24.7   172.50.173.194   172.50.173.194   Windows Server 2019 Standard   10.0.17763.3532     containerd://1.6.8
172-50-175-141   Ready      <none>          3d15h   v1.24.7   172.50.175.141   172.50.175.141   Windows Server 2019 Standard   10.0.17763.3532     containerd://1.6.8
172-50-175-211   Ready      <none>          3d15h   v1.24.7   172.50.175.211   172.50.175.211   Windows Server 2019 Standard   10.0.17763.3532     containerd://1.6.8
172-50-175-42    NotReady   <none>          3d15h   v1.24.7   172.50.175.42    172.50.175.42    Windows Server 2019 Standard   10.0.17763.3532     containerd://1.6.8
172-50-176-236   Ready      <none>          3d15h   v1.24.7   172.50.176.236   172.50.176.236   Windows Server 2019 Standard   10.0.17763.3532     containerd://1.6.8
172-50-177-105   Ready      <none>          3d15h   v1.24.7   172.50.177.105   172.50.177.105   Windows Server 2019 Standard   10.0.17763.3532     containerd://1.6.8
172-50-177-84    Ready      <none>          3d15h   v1.24.7   172.50.177.84    172.50.177.84    Windows Server 2019 Standard   10.0.17763.3532     containerd://1.6.8
172-50-177-94    Ready      <none>          3d15h   v1.24.7   172.50.177.94    172.50.177.94    Windows Server 2019 Standard   10.0.17763.3532     containerd://1.6.8
172.50.172.107   Ready      control-plane   3d15h   v1.24.7   172.50.172.107   172.50.172.107   Ubuntu 20.04.5 LTS             5.4.0-139-generic   containerd://1.6.8
172.50.176.54    Ready      control-plane   3d15h   v1.24.7   172.50.176.54    172.50.176.54    Ubuntu 20.04.5 LTS             5.4.0-139-generic   containerd://1.6.8
172.50.176.62    Ready      control-plane   3d15h   v1.24.7   172.50.176.62    172.50.176.62    Ubuntu 20.04.5 LTS             5.4.0-139-generic   containerd://1.6.8

Other info:

System pod info:

kube-system         calico-kube-controllers-687d48f7b5-njvb5                          1/1     Running             1 (46h ago)        3d15h   10.244.132.136   172.50.172.107   <none>           <none>
kube-system         calico-node-lvtpq                                                 1/1     Running             1 (46h ago)        3d15h   172.50.176.54    172.50.176.54    <none>           <none>
kube-system         calico-node-m6mmd                                                 1/1     Running             1 (46h ago)        3d15h   172.50.176.62    172.50.176.62    <none>           <none>
kube-system         calico-node-w59dh                                                 1/1     Running             1 (46h ago)        3d15h   172.50.172.107   172.50.172.107   <none>           <none>
kube-system         calico-node-windows-27kjq                                         2/2     Running             4 (46h ago)        3d15h   172.50.175.42    172-50-175-42    <none>           <none>
kube-system         calico-node-windows-bsx8d                                         2/2     Running             4 (46h ago)        3d15h   172.50.172.180   172-50-172-180   <none>           <none>
kube-system         calico-node-windows-gmxp7                                         2/2     Running             4 (46h ago)        3d15h   172.50.175.211   172-50-175-211   <none>           <none>
kube-system         calico-node-windows-gpdsz                                         2/2     Running             4 (46h ago)        3d15h   172.50.177.84    172-50-177-84    <none>           <none>
kube-system         calico-node-windows-gwjdc                                         2/2     Running             4 (46h ago)        3d15h   172.50.177.94    172-50-177-94    <none>           <none>
kube-system         calico-node-windows-m72zk                                         2/2     Running             4 (46h ago)        3d15h   172.50.175.141   172-50-175-141   <none>           <none>
kube-system         calico-node-windows-mkxcd                                         2/2     Running             4 (46h ago)        3d15h   172.50.177.105   172-50-177-105   <none>           <none>
kube-system         calico-node-windows-qn767                                         2/2     Running             4 (46h ago)        3d15h   172.50.173.194   172-50-173-194   <none>           <none>
kube-system         calico-node-windows-qpxll                                         2/2     Running             4 (46h ago)        3d15h   172.50.176.236   172-50-176-236   <none>           <none>
kube-system         coredns-6d4b75cb6d-625cq                                          1/1     Running             1 (46h ago)        3d15h   10.244.132.135   172.50.172.107   <none>           <none>
kube-system         coredns-6d4b75cb6d-ww5pc                                          1/1     Running             1 (46h ago)        3d15h   10.244.132.137   172.50.172.107   <none>           <none>
kube-system         etcd-172.50.172.107                                               1/1     Running             1 (46h ago)        3d15h   172.50.172.107   172.50.172.107   <none>           <none>
kube-system         etcd-172.50.176.54                                                1/1     Running             1 (46h ago)        3d15h   172.50.176.54    172.50.176.54    <none>           <none>
kube-system         etcd-172.50.176.62                                                1/1     Running             1 (46h ago)        3d15h   172.50.176.62    172.50.176.62    <none>           <none>
kube-system         haproxy-172.50.172.107                                            1/1     Running             1 (46h ago)        3d15h   172.50.172.107   172.50.172.107   <none>           <none>
kube-system         haproxy-172.50.176.54                                             1/1     Running             1 (46h ago)        3d15h   172.50.176.54    172.50.176.54    <none>           <none>
kube-system         haproxy-172.50.176.62                                             1/1     Running             1 (46h ago)        3d15h   172.50.176.62    172.50.176.62    <none>           <none>
kube-system         keepalived-172.50.172.107                                         1/1     Running             1 (46h ago)        3d15h   172.50.172.107   172.50.172.107   <none>           <none>
kube-system         keepalived-172.50.176.54                                          1/1     Running             1 (46h ago)        3d15h   172.50.176.54    172.50.176.54    <none>           <none>
kube-system         keepalived-172.50.176.62                                          1/1     Running             1 (46h ago)        3d15h   172.50.176.62    172.50.176.62    <none>           <none>
kube-system         kube-apiserver-172.50.172.107                                     1/1     Running             1 (46h ago)        3d15h   172.50.172.107   172.50.172.107   <none>           <none>
kube-system         kube-apiserver-172.50.176.54                                      1/1     Running             1 (46h ago)        3d15h   172.50.176.54    172.50.176.54    <none>           <none>
kube-system         kube-apiserver-172.50.176.62                                      1/1     Running             2 (46h ago)        3d15h   172.50.176.62    172.50.176.62    <none>           <none>
kube-system         kube-controller-manager-172.50.172.107                            1/1     Running             1 (46h ago)        3d15h   172.50.172.107   172.50.172.107   <none>           <none>
kube-system         kube-controller-manager-172.50.176.54                             1/1     Running             2 (46h ago)        3d15h   172.50.176.54    172.50.176.54    <none>           <none>
kube-system         kube-controller-manager-172.50.176.62                             1/1     Running             1 (46h ago)        3d15h   172.50.176.62    172.50.176.62    <none>           <none>
kube-system         kube-proxy-dr9xb                                                  1/1     Running             1 (46h ago)        3d15h   172.50.176.62    172.50.176.62    <none>           <none>
kube-system         kube-proxy-r8rc7                                                  1/1     Running             1 (46h ago)        3d15h   172.50.172.107   172.50.172.107   <none>           <none>
kube-system         kube-proxy-tlpkg                                                  1/1     Running             1 (46h ago)        3d15h   172.50.176.54    172.50.176.54    <none>           <none>
kube-system         kube-proxy-windows-4bn7w                                          1/1     Running             1 (46h ago)        3d15h   172.50.177.84    172-50-177-84    <none>           <none>
kube-system         kube-proxy-windows-6cwhm                                          1/1     Running             1 (46h ago)        3d15h   172.50.176.236   172-50-176-236   <none>           <none>
kube-system         kube-proxy-windows-8rk86                                          1/1     Running             1 (46h ago)        3d15h   172.50.175.211   172-50-175-211   <none>           <none>
kube-system         kube-proxy-windows-bhkrw                                          1/1     Running             1 (46h ago)        3d15h   172.50.175.42    172-50-175-42    <none>           <none>
kube-system         kube-proxy-windows-dd55t                                          1/1     Running             1 (46h ago)        3d15h   172.50.175.141   172-50-175-141   <none>           <none>
kube-system         kube-proxy-windows-fz2db                                          1/1     Running             2 (46h ago)        3d15h   172.50.173.194   172-50-173-194   <none>           <none>
kube-system         kube-proxy-windows-kk49j                                          1/1     Running             1 (46h ago)        3d15h   172.50.172.180   172-50-172-180   <none>           <none>
kube-system         kube-proxy-windows-ng6bw                                          1/1     Running             1 (46h ago)        3d15h   172.50.177.94    172-50-177-94    <none>           <none>
kube-system         kube-proxy-windows-rzknf                                          1/1     Running             1 (46h ago)        3d15h   172.50.177.105   172-50-177-105   <none>           <none>
kube-system         kube-scheduler-172.50.172.107                                     1/1     Running             1 (46h ago)        3d15h   172.50.172.107   172.50.172.107   <none>           <none>
kube-system         kube-scheduler-172.50.176.54                                      1/1     Running             2 (46h ago)        3d15h   172.50.176.54    172.50.176.54    <none>           <none>
kube-system         kube-scheduler-172.50.176.62                                      1/1     Running             1 (46h ago)        3d15h   172.50.176.62    172.50.176.62    <none>           <none>
kube-system         snapshot-controller-697f76d5b9-q25s9                              1/1     Running             1 (46h ago)        3d15h   10.244.132.139   172.50.172.107   <none>           <none>
kube-system         snapshot-controller-697f76d5b9-ql6kp                              1/1     Running             1 (46h ago)        3d15h   10.244.92.71     172.50.176.54    <none>           <none>
kube-system         snapshot-validation-deployment-688bc6f7-lh74n                     1/1     Running             1 (46h ago)        3d15h   10.244.92.70     172.50.176.54    <none>           <none>
kube-system         snapshot-validation-deployment-688bc6f7-qpfw4                     1/1     Running             1 (46h ago)        3d15h   10.244.92.72     172.50.176.54    <none>           <none>
kube-system         snapshot-validation-deployment-688bc6f7-xpknr                     1/1     Running             1 (46h ago)        3d15h   10.244.132.138   172.50.172.107   <none>           <none>
kube-system         vsphere-cloud-controller-manager-64bkj                            1/1     Running             1 (46h ago)        3d15h   172.50.176.54    172.50.176.54    <none>           <none>
kube-system         vsphere-cloud-controller-manager-j6jr6                            1/1     Running             1 (46h ago)        3d15h   172.50.172.107   172.50.172.107   <none>           <none>
kube-system         vsphere-cloud-controller-manager-zcjwz                            1/1     Running             1 (46h ago)        3d15h   172.50.176.62    172.50.176.62    <none>           <none>
tigera-operator     tigera-operator-5d55d44bd9-6ssd9                                  1/1     Running             712 (6m47s ago)    3d15h   172.50.176.54    172.50.176.54    <none>           <none>
vmware-system-csi   vsphere-csi-controller-6b5779fc8-86h5m                            7/7     Running             7 (46h ago)        3d15h   10.244.92.73     172.50.176.54    <none>           <none>
vmware-system-csi   vsphere-csi-controller-6b5779fc8-qrtp5                            7/7     Running             7 (46h ago)        3d15h   10.244.21.194    172.50.176.62    <none>           <none>
vmware-system-csi   vsphere-csi-controller-6b5779fc8-w6vht                            7/7     Running             7 (46h ago)        3d15h   10.244.132.140   172.50.172.107   <none>           <none>
vmware-system-csi   vsphere-csi-node-27xrz                                            3/3     Running             4 (46h ago)        3d15h   172.50.176.54    172.50.176.54    <none>           <none>
vmware-system-csi   vsphere-csi-node-q8kcf                                            3/3     Running             6 (46h ago)        3d15h   172.50.172.107   172.50.172.107   <none>           <none>
vmware-system-csi   vsphere-csi-node-rdvbm                                            3/3     Running             7 (46h ago)        3d15h   172.50.176.62    172.50.176.62    <none>           <none>
vmware-system-csi   vsphere-csi-node-windows-725b8                                    2/3     CrashLoopBackOff    3 (46h ago)        3d15h   10.244.70.39     172-50-175-42    <none>           <none>
vmware-system-csi   vsphere-csi-node-windows-7m5w8                                    1/3     CrashLoopBackOff    1492 (38s ago)     3d15h   10.244.197.166   172-50-175-141   <none>           <none>
vmware-system-csi   vsphere-csi-node-windows-9ggck                                    1/3     CrashLoopBackOff    1489 (92s ago)     3d15h   10.244.151.231   172-50-177-84    <none>           <none>
vmware-system-csi   vsphere-csi-node-windows-f5snf                                    2/3     CrashLoopBackOff    1488 (6s ago)      3d15h   10.244.123.167   172-50-172-180   <none>           <none>
vmware-system-csi   vsphere-csi-node-windows-kcvjp                                    1/3     CrashLoopBackOff    1488 (30s ago)     3d15h   10.244.170.168   172-50-176-236   <none>           <none>
vmware-system-csi   vsphere-csi-node-windows-qqrl7                                    1/3     CrashLoopBackOff    1483 (4m8s ago)    3d15h   10.244.109.39    172-50-173-194   <none>           <none>
vmware-system-csi   vsphere-csi-node-windows-r2mxn                                    1/3     CrashLoopBackOff    1490 (63s ago)     3d15h   10.244.37.103    172-50-177-105   <none>           <none>
vmware-system-csi   vsphere-csi-node-windows-rrrwv                                    1/3     CrashLoopBackOff    1490 (14s ago)     3d15h   10.244.171.168   172-50-175-211   <none>           <none>
vmware-system-csi   vsphere-csi-node-windows-vv4dx                                    1/3     CrashLoopBackOff    1485 (2m58s ago)   3d15h   10.244.83.230    172-50-177-94    <none>           <none>

Pod events:

Pod/vsphere-csi-node-windows-725b8                                     vmware-system-csi    Warning    NetworkNotReady                          kubelet                                                                                              2023-02-11 11:08:02            133        network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized 
Pod/vsphere-csi-node-windows-725b8                                     vmware-system-csi    Warning    Unhealthy                                kubelet                                                                                              2023-02-11 11:03:44            1          Liveness probe failed: F0211 03:03:44.468647    7960 main.go:160] Kubelet plugin registration hasn't succeeded yet, file=C:\var\lib\kubelet\plugins\csi.vsphere.vmware.com\registration doesn't exist.
 
Pod/vsphere-csi-node-windows-725b8                                     vmware-system-csi    Warning    Unhealthy                                kubelet                                                                                              2023-02-11 11:03:34            1          Liveness probe failed: F0211 03:03:34.471332    8056 main.go:160] Kubelet plugin registration hasn't succeeded yet, file=C:\var\lib\kubelet\plugins\csi.vsphere.vmware.com\registration doesn't exist.

Calico node logs:

calico-node-windows-27kjq_kube-system_calico-node-felix-9b6c941864a85413f6c979c4960151d4a600a2fbbde95af4f6d0cc67e02d4e7d.log: 2023-02-11T03:04:43.9569695-08:00 stdout F 2023-02-11 03:04:43.956 [INFO][4012] felix/calc_graph.go 462: Local endpoint updated id=WorkloadEndpoint(node=172-50-175-42, orchestrator=k8s, workload=vmware-system-csi/vsphere-csi-node-windows-725b8, name=eth0)
calico-node-windows-27kjq_kube-system_calico-node-felix-9b6c941864a85413f6c979c4960151d4a600a2fbbde95af4f6d0cc67e02d4e7d.log: 2023-02-11T03:04:43.9674396-08:00 stdout F 2023-02-11 03:04:43.967 [INFO][4012] felix/win_dataplane.go 255: Received *proto.WorkloadEndpointUpdate update from calculation graph msg=id:<orchestrator_id:"k8s" workload_id:"vmware-system-csi/vsphere-csi-node-windows-725b8" endpoint_id:"eth0" > endpoint:<state:"active" name:"cali576c6c056bf" profile_ids:"kns.vmware-system-csi" profile_ids:"ksa.vmware-system-csi.vsphere-csi-node" ipv4_nets:"10.244.70.39/32" > 
calico-node-windows-27kjq_kube-system_calico-node-felix-9b6c941864a85413f6c979c4960151d4a600a2fbbde95af4f6d0cc67e02d4e7d.log: 2023-02-11T03:04:43.9674396-08:00 stdout F 2023-02-11 03:04:43.967 [INFO][4012] felix/endpoint_mgr.go 139: Processing WorkloadEndpointUpdate workloadEndpointId=orchestrator_id:"k8s" workload_id:"vmware-system-csi/vsphere-csi-node-windows-725b8" endpoint_id:"eth0" 
calico-node-windows-27kjq_kube-system_calico-node-felix-9b6c941864a85413f6c979c4960151d4a600a2fbbde95af4f6d0cc67e02d4e7d.log: 2023-02-11T03:04:43.9818085-08:00 stdout F 2023-02-11 03:04:43.981 [WARNING][4012] felix/endpoint_mgr.go 351: Failed to look up HNS endpoint for workload id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"vmware-system-csi/vsphere-csi-node-windows-725b8", EndpointId:"eth0"}
calico-node-windows-27kjq_kube-system_calico-node-felix-9b6c941864a85413f6c979c4960151d4a600a2fbbde95af4f6d0cc67e02d4e7d.log: 2023-02-11T03:04:49.0749373-08:00 stdout F 2023-02-11 03:04:49.074 [WARNING][4012] felix/endpoint_mgr.go 351: Failed to look up HNS endpoint for workload id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"vmware-system-csi/vsphere-csi-node-windows-725b8", EndpointId:"eth0"}
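The felix "Failed to look up HNS endpoint" warnings above, together with the kubelet `Calico_ep ... The object already exists (0x1392)` errors further down, point at stale HNS state surviving the unplanned restart. A hedged diagnostic sketch, to be run in an elevated PowerShell session on the affected Windows node (`hnsdiag` ships with the Windows containers feature):

```shell
# List HNS networks and endpoints; look for a leftover "Calico_ep" endpoint
# or a Calico HNS network that survived the unplanned restart.
hnsdiag list networks
hnsdiag list endpoints

# Restarting the Host Network Service is a commonly used recovery step for
# stale HNS state, but it is NOT confirmed as the fix in this thread, and it
# is disruptive: it tears down networking for every container on the node.
Restart-Service hns
```

If `Restart-Service hns` is used, expect the node's pods to be re-networked (and possibly restarted) afterwards.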

Kubelet logs:

kubelet.exe.172-50-175-42.WORKGROUP_172-50-175-42$.log.INFO.20230209-102551.1120: I0209 10:34:10.261403    1120 fake_topology_manager.go:58] "AddContainer" pod="vmware-system-csi/vsphere-csi-node-windows-725b8" containerName="liveness-probe" containerID="01ccaef7283c88c446c3a79d18f1923a9b16b93ff91f64bcd157db0d3e07203c"
kubelet.exe.172-50-175-42.WORKGROUP_172-50-175-42$.log.ERROR.20230211-030253.5512: E0211 03:03:11.629393    5512 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18465924708dc7277aa6a3f9061e8a276c2b380d4bdc45dc953c0146342af6f5\": plugin type=\"calico\" failed (add): failed to create remote HNSEndpoint Calico_ep: failed during hnsCallRawResponse: hnsCall failed in Win32: The object already exists. (0x1392)" pod="vmware-system-csi/vsphere-csi-node-windows-725b8"
kubelet.exe.172-50-175-42.WORKGROUP_172-50-175-42$.log.ERROR.20230211-030253.5512: E0211 03:03:11.629946    5512 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18465924708dc7277aa6a3f9061e8a276c2b380d4bdc45dc953c0146342af6f5\": plugin type=\"calico\" failed (add): failed to create remote HNSEndpoint Calico_ep: failed during hnsCallRawResponse: hnsCall failed in Win32: The object already exists. (0x1392)" pod="vmware-system-csi/vsphere-csi-node-windows-725b8"
kubelet.exe.172-50-175-42.WORKGROUP_172-50-175-42$.log.ERROR.20230211-030253.5512: E0211 03:03:11.632233    5512 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"vsphere-csi-node-windows-725b8_vmware-system-csi(457c678e-5800-4af0-af44-a36dbbdc0f35)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"vsphere-csi-node-windows-725b8_vmware-system-csi(457c678e-5800-4af0-af44-a36dbbdc0f35)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"18465924708dc7277aa6a3f9061e8a276c2b380d4bdc45dc953c0146342af6f5\\\": plugin type=\\\"calico\\\" failed (add): failed to create remote HNSEndpoint Calico_ep: failed during hnsCallRawResponse: hnsCall failed in Win32: The object already exists. (0x1392)\"" pod="vmware-system-csi/vsphere-csi-node-windows-725b8" podUID=457c678e-5800-4af0-af44-a36dbbdc0f35
kubelet.exe.172-50-175-42.WORKGROUP_172-50-175-42$.log.ERROR.20230211-030253.5512: E0211 03:03:25.286073    5512 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"vsphere-csi-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=vsphere-csi-node pod=vsphere-csi-node-windows-725b8_vmware-system-csi(457c678e-5800-4af0-af44-a36dbbdc0f35)\"" pod="vmware-system-csi/vsphere-csi-node-windows-725b8" podUID=457c678e-5800-4af0-af44-a36dbbdc0f35
kubelet.exe.172-50-175-42.WORKGROUP_172-50-175-42$.log.ERROR.20230211-030253.5512: E0211 03:03:26.709492    5512 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"vsphere-csi-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=vsphere-csi-node pod=vsphere-csi-node-windows-725b8_vmware-system-csi(457c678e-5800-4af0-af44-a36dbbdc0f35)\"" pod="vmware-system-csi/vsphere-csi-node-windows-725b8" podUID=457c678e-5800-4af0-af44-a36dbbdc0f35
kubelet.exe.172-50-175-42.WORKGROUP_172-50-175-42$.log.ERROR.20230211-030253.5512: E0211 03:03:40.153528    5512 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="vmware-system-csi/vsphere-csi-node-windows-725b8" podUID=457c678e-5800-4af0-af44-a36dbbdc0f35
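The `cni plugin not initialized` message from containerd generally means the CRI plugin could not load a CNI conflist from its configured conf dir. A sketch to verify this on the Windows node; the paths here are assumptions based on common Calico-for-Windows installs, so adjust them to whatever `conf_dir` under `[plugins."io.containerd.grpc.v1.cri".cni]` in containerd's config.toml actually points at:

```shell
# In PowerShell on the Windows node: find the CNI settings containerd is using.
Get-Content "C:\Program Files\containerd\config.toml" | Select-String -Pattern "cni" -Context 0,3

# Assumed conf_dir and conflist name -- adjust to match config.toml.
Get-ChildItem "C:\etc\cni\net.d"
Get-Content  "C:\etc\cni\net.d\10-calico.conflist"
```

If the conflist is missing or empty after a restart, that would explain why kubelet keeps reporting NetworkPluginNotReady even though the calico-node-windows pod shows Running.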

To Reproduce
The issue is observed when Windows nodes are restarted unexpectedly (unplanned restarts).

Expected behavior
Calico on the Windows nodes should recover networking after a restart, and pods should come up once the node reports NetworkReady.

Kubernetes (please complete the following information):

  • Windows Server version: Windows Server 2019 Standard
  • Kubernetes version: v1.24.7
  • CNI: Calico for Windows (HostProcess containers)
  • Tigera operator: quay.io/tigera/operator:v1.28.5
  • Calico for Linux: calico/node:v3.24.0, calico/kube-controllers:v3.24.0
  • Calico for Windows: sigwindowstools/calico-node:v3.24.5-hostprocess


@pradeep-hegde
Author

cc: @jayunit100 @knabben

@pramodsharma62

Same issue here!

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on May 21, 2023.
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jun 20, 2023.
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot closed this as not planned on Jul 20, 2023.