(Sysdump was too big to upload, please see run URL for download)
Log Output
2023-11-21T23:28:00.0809178Z ------------------------------
2023-11-21T23:28:00.0810163Z • [FAILED] [240.346 seconds]
2023-11-21T23:28:00.0811630Z [sig-node] Probing container [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
2023-11-21T23:28:00.0813230Z test/e2e/common/node/container_probe.go:214
2023-11-21T23:28:00.0813740Z
2023-11-21T23:28:00.0813983Z Timeline >>
2023-11-21T23:28:00.0814804Z STEP: Creating a kubernetes client @ 11/21/23 23:23:59.671
2023-11-21T23:28:00.0815983Z Nov 21 23:23:59.671: INFO: >>> kubeConfig: /home/runner/work/cilium/cilium/_artifacts/kubeconfig.conf
2023-11-21T23:28:00.0817524Z STEP: Building a namespace api object, basename container-probe @ 11/21/23 23:23:59.672
2023-11-21T23:28:00.0819052Z STEP: Waiting for a default service account to be provisioned in namespace @ 11/21/23 23:23:59.752
2023-11-21T23:28:00.0820479Z STEP: Waiting for kube-root-ca.crt to be provisioned in namespace @ 11/21/23 23:23:59.756
2023-11-21T23:28:00.0821985Z STEP: Creating pod test-webserver-cfef5e44-a482-44a2-a2ab-1c62f4c90c47 in namespace container-probe-1676 @ 11/21/23 23:23:59.76
2023-11-21T23:28:00.0823160Z Nov 21 23:27:59.802: INFO: Failed inside E2E framework:
2023-11-21T23:28:00.0824532Z k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodCondition({0x7faa284f3498, 0xc004fd2ea0}, {0x77d9a80?, 0xc005606340?}, {0xc0049b2570, 0x14}, {0xc005b26680, 0x33}, {0x6fd1346, 0x15}, ...)
2023-11-21T23:28:00.0825752Z test/e2e/framework/pod/wait.go:227 +0x25f
2023-11-21T23:28:00.0826798Z k8s.io/kubernetes/test/e2e/common/node.runLivenessTest({0x7faa284f3498, 0xc004fd2ea0}, 0xc0003b9c20, 0xc004047680, 0x0, 0x37e11d6000, {0x6faaf7b, 0xe})
2023-11-21T23:28:00.0828052Z test/e2e/common/node/container_probe.go:1730 +0x2f0
2023-11-21T23:28:00.0829077Z k8s.io/kubernetes/test/e2e/common/node.RunLivenessTest({0x7faa284f3498, 0xc004fd2ea0}, 0x6faaf7b?, 0xc004047680, 0x50?, 0x3ecbda7?)
2023-11-21T23:28:00.0830024Z test/e2e/common/node/container_probe.go:1707 +0xce
2023-11-21T23:28:00.0831210Z k8s.io/kubernetes/test/e2e/common/node.glob..func2.9({0x7faa284f3498, 0xc004fd2ea0})
2023-11-21T23:28:00.0831932Z test/e2e/common/node/container_probe.go:222 +0x12c
2023-11-21T23:28:00.0832801Z [FAILED] in [It] - test/e2e/common/node/container_probe.go:1730 @ 11/21/23 23:27:59.802
2023-11-21T23:28:00.0834306Z Nov 21 23:27:59.802: INFO: Waiting up to 7m0s for all (but 0) nodes to be ready
2023-11-21T23:28:27.4707356Z • [FAILED] [302.331 seconds]
2023-11-21T23:28:27.4708302Z [sig-apps] Deployment [It] should validate Deployment Status endpoints [Conformance]
2023-11-21T23:28:27.6752759Z [FAILED] error waiting for deployment "test-deployment-kqwpn" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.November, 21, 23, 23, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.November, 21, 23, 23, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.November, 21, 23, 23, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.November, 21, 23, 23, 25, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-kqwpn-5d576bd769\" is progressing."}}, CollisionCount:(*int32)(nil)}
2023-11-21T23:28:29.7196963Z [FAILED] in [It] - test/e2e/framework/pod/resource.go:369 @ 11/21/23 23:28:29.516
2023-11-21T23:28:29.7198145Z Nov 21 23:28:29.516: INFO: Waiting up to 7m0s for all (but 0) nodes to be ready
2023-11-21T23:29:11.8893330Z Nov 21 23:29:11.706: INFO: At 2023-11-21 23:27:20 +0000 UTC - event for netserver-1: {kubelet cilium-testing-worker2} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "78e3597366566df22a8efc6e045ce87103fcd89d0aecffa297d595020896777a": plugin type="cilium-cni" failed (add): unable to create endpoint: [PUT /endpoint/{id}][429] putEndpointIdTooManyRequests
learnitall added the area/CI (Continuous Integration testing issue or flake) and ci/flake (This is a known failure that occurs in the tree. Please investigate me!) labels on Nov 22, 2023.
Logs in the corresponding sysdumps contain many putEndpointIdTooManyRequests / "Unable to create endpoint" messages. The CNI plugin is unable to create endpoints because of a deadlock on the agent side: endpoint-creation requests pile up until the agent's API rate limiter starts rejecting them with HTTP 429.
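To confirm this failure mode in a sysdump, the agent logs can be searched for the 429 signature. A minimal sketch, assuming nothing about the sysdump layout (the `/tmp/agent.log` sample below is a stand-in for a real `cilium-agent` log file extracted from the sysdump):

```shell
# Stand-in for a cilium-agent log file from the sysdump:
cat > /tmp/agent.log <<'EOF'
level=error msg="Unable to create endpoint" error="[PUT /endpoint/{id}][429] putEndpointIdTooManyRequests"
level=info msg="Create endpoint request" containerID=78e35973
level=error msg="Unable to create endpoint" error="[PUT /endpoint/{id}][429] putEndpointIdTooManyRequests"
EOF

# Count rate-limited endpoint creations; a deadlocked agent shows a
# steady stream of these rather than an occasional burst.
grep -c 'putEndpointIdTooManyRequests' /tmp/agent.log   # → 2
```

On a real sysdump one would run the `grep` recursively (`grep -rc`) over the extracted log directory; a high count on a single node, as seen on cilium-testing-worker2 above, points at the stuck agent.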
CI failure
Lots of sub-tests failed in this workflow.
Run URL
https://github.com/cilium/cilium/actions/runs/6950658033
Zip Files
kind-logs.zip