Labels: ci/flake, stale
Test Name
K8sUpdates Tests upgrade and downgrade from a Cilium stable image to master
Failure Output
FAIL: Expected
Stacktrace
/home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:527
Expected
<int>: 1
to be ==
<int>: 0
/home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-4.9/src/github.com/cilium/cilium/test/k8s/updates.go:127
Standard Output
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 4
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
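The "Number of … in logs" lines above are produced by the harness scanning each agent's log for known bad patterns (here the third pod shows 4 occurrences of "context deadline exceeded"). A minimal sketch of the same scan, using a fabricated local log sample rather than real `kubectl logs` output:

```shell
# Illustrative only: count known bad patterns in an agent log with grep -c.
# The sample log below is fabricated for demonstration; in CI the input
# comes from the cilium-agent container logs.
cat > /tmp/cilium-agent-sample.log <<'EOF'
level=info msg="Initializing daemon"
level=info msg="Unable to reach kvstore" error="context deadline exceeded"
level=info msg="Retrying connection" error="context deadline exceeded"
EOF

for pattern in "context deadline exceeded" "level=error" "level=warning"; do
  # grep -c exits non-zero on zero matches but still prints 0
  count=$(grep -c -- "$pattern" /tmp/cilium-agent-sample.log || true)
  echo "Number of \"$pattern\" in logs: $count"
done
# → Number of "context deadline exceeded" in logs: 2
# → Number of "level=error" in logs: 0
# → Number of "level=warning" in logs: 0
```

Note that "context deadline exceeded" hits alone do not fail the run here; the report still prints "No errors/warnings found in logs" for each pod.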
Cilium pods: [cilium-m9bjp cilium-pjjrn]
Netpols loaded:
CiliumNetworkPolicies loaded: default::l7-policy
Endpoint Policy Enforcement:
Pod Ingress Egress
app1-786c6d794d-jzkv5 false false
app1-786c6d794d-nljl5 false false
migrate-svc-client-f9kdt false false
migrate-svc-client-nsplf false false
migrate-svc-client-w6xzs false false
migrate-svc-server-cg8k6 false false
coredns-7c74c644b-4mxnm false false
app2-58757b7dd5-mtrzj false false
app3-5d69599cdd-slbvp false false
migrate-svc-client-7sbcj false false
migrate-svc-client-pjkhf false false
migrate-svc-server-8lzdw false false
migrate-svc-server-ccsdt false false
Cilium agent 'cilium-m9bjp': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 60 Failed 0
Cilium agent 'cilium-pjjrn': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 45 Failed 0
Standard Error
23:09:10 STEP: Running BeforeAll block for EntireTestsuite K8sUpdates
23:09:10 STEP: Ensuring the namespace kube-system exists
23:09:10 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
23:09:10 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
23:09:11 STEP: Waiting for pods to be terminated
23:09:23 STEP: Deleting Cilium and CoreDNS
23:09:23 STEP: Waiting for pods to be terminated
23:09:23 STEP: Cleaning Cilium state (cfeef87ece75deeae6f6d6af5701ed27a81e608c)
23:09:23 STEP: Cleaning up Cilium components
23:09:24 STEP: Waiting for Cilium to become ready
23:10:20 STEP: Cleaning Cilium state (v1.11)
23:10:20 STEP: Cleaning up Cilium components
23:10:20 STEP: Waiting for Cilium to become ready
23:10:46 STEP: Deploying Cilium 1.11
23:10:46 STEP: Waiting for Cilium to become ready
23:11:57 STEP: Validating Cilium Installation
23:11:57 STEP: Performing Cilium controllers preflight check
23:11:57 STEP: Performing Cilium health check
23:11:57 STEP: Performing Cilium status preflight check
23:11:57 STEP: Checking whether host EP regenerated
23:12:04 STEP: Performing Cilium service preflight check
23:12:10 STEP: Cilium is not ready yet: cilium services are not set up correctly: Error validating Cilium service on pod {cilium-tdqqf [{0xc002762300 0xc000c4e080} {0xc002762440 0xc000c4e088} {0xc002762540 0xc000c4e090} {0xc002762640 0xc000c4e098} {0xc002762740 0xc000c4e0a0} {0xc002762840 0xc000c4e0a8}] map[10.101.86.246:443:[0.0.0.0:0 (4) [ClusterIP, non-routable] 192.168.56.12:4244 (4) 192.168.56.11:4244 (4)] 10.104.62.11:9090:[10.0.0.129:9090 (6) 0.0.0.0:0 (6) [ClusterIP, non-routable]] 10.106.246.181:3000:[0.0.0.0:0 (3) [ClusterIP, non-routable]] 10.96.0.10:53:[0.0.0.0:0 (1) [ClusterIP, non-routable] 10.0.1.249:53 (1)] 10.96.0.10:9153:[10.0.1.249:9153 (2) 0.0.0.0:0 (2) [ClusterIP, non-routable]] 10.96.0.1:443:[192.168.56.11:6443 (5) 0.0.0.0:0 (5) [ClusterIP, non-routable]]]}: Could not match cilium service backend address 10.0.1.249:53 with k8s endpoint
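The service preflight failure above means one backend (10.0.1.249:53, a CoreDNS address mid-restart) was present in Cilium's service map but absent from the corresponding Kubernetes Endpoints object; the check retries until the two sides agree. A rough sketch of that set comparison, with fabricated sample data:

```shell
# Illustrative only: compare Cilium-side service backends against k8s
# Endpoints addresses. Both lists below are fabricated stand-ins.
printf '%s\n' "10.0.1.249:53" "10.0.1.249:9153" | sort > /tmp/cilium-backends
printf '%s\n' "10.0.1.249:9153"                 | sort > /tmp/k8s-endpoints

# comm -23 prints lines only in the first (sorted) file: backends Cilium
# knows about that k8s no longer lists -- the condition the preflight
# reports as "Could not match cilium service backend address".
comm -23 /tmp/cilium-backends /tmp/k8s-endpoints
# → 10.0.1.249:53
```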
23:12:10 STEP: Performing Cilium status preflight check
23:12:10 STEP: Performing Cilium health check
23:12:10 STEP: Checking whether host EP regenerated
23:12:10 STEP: Performing Cilium controllers preflight check
23:12:18 STEP: Performing Cilium service preflight check
23:12:18 STEP: Performing K8s service preflight check
23:12:24 STEP: Waiting for cilium-operator to be ready
23:12:24 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
23:12:24 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
23:12:24 STEP: Cilium "1.11" is installed and running
23:12:24 STEP: Restarting DNS Pods
23:12:31 STEP: Waiting for kube-dns to be ready
23:12:31 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=kube-dns")
23:12:31 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=kube-dns") => <nil>
23:12:31 STEP: Running kube-dns preflight check
23:12:38 STEP: Performing K8s service preflight check
23:12:38 STEP: Creating some endpoints and L7 policy
23:12:38 STEP: WaitforPods(namespace="default", filter="-l zgroup=testapp")
23:12:44 STEP: WaitforPods(namespace="default", filter="-l zgroup=testapp") => <nil>
23:12:52 STEP: Creating service and clients for migration
23:12:52 STEP: WaitforPods(namespace="default", filter="-l app=migrate-svc-server")
23:12:54 STEP: WaitforPods(namespace="default", filter="-l app=migrate-svc-server") => <nil>
23:12:54 STEP: WaitforPods(namespace="default", filter="-l app=migrate-svc-client")
23:12:57 STEP: WaitforPods(namespace="default", filter="-l app=migrate-svc-client") => <nil>
23:12:57 STEP: Validate that endpoints are ready before making any connection
23:13:00 STEP: Waiting for kube-dns to be ready
23:13:00 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=kube-dns")
23:13:00 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=kube-dns") => <nil>
23:13:00 STEP: Running kube-dns preflight check
23:13:07 STEP: Performing K8s service preflight check
23:13:08 STEP: Making L7 requests between endpoints
23:13:08 STEP: No interrupts in migrated svc flows
23:13:08 STEP: Install Cilium pre-flight check DaemonSet
23:13:08 STEP: Waiting for all cilium pre-flight pods to be ready
23:13:08 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-pre-flight-check")
23:13:18 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-pre-flight-check") => <nil>
23:13:18 STEP: Removing Cilium pre-flight check DaemonSet
23:13:18 STEP: Waiting for Cilium to become ready
23:13:19 STEP: Upgrading Cilium to 1.11.90
23:13:19 STEP: Validating pods have the right image version upgraded
23:13:27 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium")
23:13:58 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium") => <nil>
23:13:58 STEP: Checking that installed image is "cfeef87ece75deeae6f6d6af5701ed27a81e608c"
23:13:58 STEP: Waiting for Cilium to become ready
23:13:59 STEP: Validating Cilium Installation
23:13:59 STEP: Performing Cilium controllers preflight check
23:13:59 STEP: Performing Cilium health check
23:13:59 STEP: Performing Cilium status preflight check
23:13:59 STEP: Checking whether host EP regenerated
23:14:06 STEP: Performing Cilium service preflight check
23:14:06 STEP: Performing K8s service preflight check
23:14:07 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-pjjrn': Exitcode: 1
Err: exit status 1
Stdout:
Stderr:
Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory
command terminated with exit code 1
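The health probe above fails because it talks to the agent over a unix socket, and immediately after the upgrade the health endpoint (and its socket file) may not exist yet. A minimal sketch of that failure mode, using a stand-in path rather than the real `/var/run/cilium/health.sock`:

```shell
# Illustrative only: the probe dials a unix socket; if the agent's health
# endpoint has not been recreated after an upgrade, the socket file is
# simply absent, yielding "connect: no such file or directory".
SOCK=/tmp/does-not-exist/health.sock   # stand-in path, not the real socket
if [ -S "$SOCK" ]; then
  echo "socket present: probe can proceed"
else
  echo "dial unix $SOCK: connect: no such file or directory"
fi
```

This is a transient state; the subsequent preflight iterations in the log pass once the agent finishes initializing.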
23:14:07 STEP: Performing Cilium controllers preflight check
23:14:07 STEP: Performing Cilium status preflight check
23:14:07 STEP: Performing Cilium health check
23:14:07 STEP: Checking whether host EP regenerated
23:14:14 STEP: Performing Cilium service preflight check
23:14:14 STEP: Performing K8s service preflight check
23:14:15 STEP: Performing Cilium controllers preflight check
23:14:15 STEP: Performing Cilium status preflight check
23:14:15 STEP: Performing Cilium health check
23:14:15 STEP: Checking whether host EP regenerated
23:14:23 STEP: Performing Cilium service preflight check
23:14:23 STEP: Performing K8s service preflight check
23:14:24 STEP: Performing Cilium status preflight check
23:14:24 STEP: Performing Cilium health check
23:14:24 STEP: Performing Cilium controllers preflight check
23:14:24 STEP: Checking whether host EP regenerated
23:14:31 STEP: Performing Cilium service preflight check
23:14:31 STEP: Performing K8s service preflight check
23:14:38 STEP: Waiting for cilium-operator to be ready
23:14:38 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
23:14:38 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
23:14:38 STEP: Validate that endpoints are ready before making any connection
23:14:40 STEP: Waiting for kube-dns to be ready
23:14:40 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=kube-dns")
23:14:40 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=kube-dns") => <nil>
23:14:40 STEP: Running kube-dns preflight check
23:14:47 STEP: Performing K8s service preflight check
23:14:48 STEP: Making L7 requests between endpoints
23:14:48 STEP: No interrupts in migrated svc flows
FAIL: Expected
<int>: 1
to be ==
<int>: 0
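The assertion at test/k8s/updates.go:127 fails with 1 != 0 right after "No interrupts in migrated svc flows", which suggests (this is an inference, not confirmed from the source) that one interruption was detected in the migrate-svc client flows. A rough sketch of such a count, with an entirely fabricated log format and marker string:

```shell
# Illustrative only: derive an interrupted-flow count from client output.
# The log format and "interrupted" marker are fabricated; the real check
# lives in test/k8s/updates.go.
cat > /tmp/migrate-svc-client.log <<'EOF'
flow ok seq=41
flow interrupted seq=42
flow ok seq=43
EOF
interrupted=$(grep -c "interrupted" /tmp/migrate-svc-client.log || true)
echo "interrupted flows: $interrupted"
# → interrupted flows: 1   (any non-zero value fails the 'to be == 0' check)
```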
=== Test Finished at 2023-06-09T23:14:51Z====
23:14:51 STEP: Running JustAfterEach block for EntireTestsuite K8sUpdates
===================== TEST FAILED =====================
23:14:52 STEP: Running AfterFailed block for EntireTestsuite K8sUpdates
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0
Stdout:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
cilium-monitoring grafana-d69c97b9b-bkdx6 0/1 Running 0 98m 10.0.0.106 k8s2 <none> <none>
cilium-monitoring prometheus-655fb888d7-8cb9l 1/1 Running 0 98m 10.0.0.129 k8s2 <none> <none>
default app1-786c6d794d-jzkv5 2/2 Running 0 2m19s 10.0.0.213 k8s1 <none> <none>
default app1-786c6d794d-nljl5 2/2 Running 0 2m19s 10.0.0.200 k8s1 <none> <none>
default app2-58757b7dd5-mtrzj 1/1 Running 0 2m19s 10.0.0.238 k8s1 <none> <none>
default app3-5d69599cdd-slbvp 1/1 Running 0 2m19s 10.0.0.170 k8s1 <none> <none>
default migrate-svc-client-7sbcj 1/1 Running 0 2m3s 10.0.1.239 k8s2 <none> <none>
default migrate-svc-client-f9kdt 1/1 Running 0 2m3s 10.0.1.106 k8s2 <none> <none>
default migrate-svc-client-nsplf 1/1 Running 0 2m3s 10.0.0.63 k8s1 <none> <none>
default migrate-svc-client-pjkhf 1/1 Running 0 2m3s 10.0.0.133 k8s1 <none> <none>
default migrate-svc-client-w6xzs 1/1 Running 0 2m3s 10.0.1.182 k8s2 <none> <none>
default migrate-svc-server-8lzdw 1/1 Running 0 2m5s 10.0.1.80 k8s2 <none> <none>
default migrate-svc-server-ccsdt 1/1 Running 0 2m5s 10.0.0.143 k8s1 <none> <none>
default migrate-svc-server-cg8k6 1/1 Running 0 2m5s 10.0.1.147 k8s2 <none> <none>
kube-system cilium-m9bjp 1/1 Running 0 91s 192.168.56.11 k8s1 <none> <none>
kube-system cilium-operator-5fc8b75d8c-5shzh 1/1 Running 0 98s 192.168.56.12 k8s2 <none> <none>
kube-system cilium-operator-5fc8b75d8c-t29q2 1/1 Running 0 98s 192.168.56.11 k8s1 <none> <none>
kube-system cilium-pjjrn 1/1 Running 0 96s 192.168.56.12 k8s2 <none> <none>
kube-system coredns-7c74c644b-4mxnm 1/1 Running 0 2m33s 10.0.0.253 k8s1 <none> <none>
kube-system etcd-k8s1 1/1 Running 0 103m 192.168.56.11 k8s1 <none> <none>
kube-system kube-apiserver-k8s1 1/1 Running 0 103m 192.168.56.11 k8s1 <none> <none>
kube-system kube-controller-manager-k8s1 1/1 Running 5 103m 192.168.56.11 k8s1 <none> <none>
kube-system kube-proxy-tvdlh 1/1 Running 0 102m 192.168.56.11 k8s1 <none> <none>
kube-system kube-proxy-zwllw 1/1 Running 0 99m 192.168.56.12 k8s2 <none> <none>
kube-system kube-scheduler-k8s1 1/1 Running 5 103m 192.168.56.11 k8s1 <none> <none>
kube-system log-gatherer-mnhw2 1/1 Running 0 98m 192.168.56.12 k8s2 <none> <none>
kube-system log-gatherer-xxq2j 1/1 Running 0 98m 192.168.56.11 k8s1 <none> <none>
kube-system registry-adder-k7dvd 1/1 Running 0 99m 192.168.56.12 k8s2 <none> <none>
kube-system registry-adder-x9lfk 1/1 Running 0 99m 192.168.56.11 k8s1 <none> <none>
Stderr:
Fetching command output from pods [cilium-m9bjp cilium-pjjrn]
cmd: kubectl exec -n kube-system cilium-m9bjp -c cilium-agent -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
17 Enabled Disabled 7614 k8s:id=app1 fd02::87 10.0.0.213 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=app1-account
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testapp
55 Disabled Disabled 25072 k8s:appSecond=true fd02::22 10.0.0.238 ready
k8s:id=app2
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=app2-account
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testapp
473 Disabled Disabled 55218 k8s:id=app3 fd02::15 10.0.0.170 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testapp
1034 Disabled Disabled 40538 k8s:app=migrate-svc-client fd02::73 10.0.0.133 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=migrate-svc
1193 Disabled Disabled 4 reserved:health fd02::be 10.0.0.198 ready
1594 Enabled Disabled 7614 k8s:id=app1 fd02::f6 10.0.0.200 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=app1-account
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testapp
1946 Disabled Disabled 1870 k8s:app=migrate-svc-server fd02::f1 10.0.0.143 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=migrate-svc
2156 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s1 ready
k8s:node-role.kubernetes.io/master
reserved:host
2792 Disabled Disabled 57217 k8s:io.cilium.k8s.policy.cluster=default fd02::d9 10.0.0.253 ready
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
3553 Disabled Disabled 40538 k8s:app=migrate-svc-client fd02::79 10.0.0.63 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=migrate-svc
Stderr:
cmd: kubectl exec -n kube-system cilium-pjjrn -c cilium-agent -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
52 Disabled Disabled 40538 k8s:app=migrate-svc-client fd02::19f 10.0.1.182 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=migrate-svc
113 Disabled Disabled 1870 k8s:app=migrate-svc-server fd02::1df 10.0.1.80 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=migrate-svc
536 Disabled Disabled 40538 k8s:app=migrate-svc-client fd02::18b 10.0.1.239 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=migrate-svc
879 Disabled Disabled 1870 k8s:app=migrate-svc-server fd02::14a 10.0.1.147 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=migrate-svc
2138 Disabled Disabled 4 reserved:health fd02::141 10.0.1.159 ready
2711 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s2 ready
reserved:host
3266 Disabled Disabled 40538 k8s:app=migrate-svc-client fd02::16b 10.0.1.106 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=migrate-svc
Stderr:
===================== Exiting AfterFailed =====================
23:15:35 STEP: Running AfterEach for block EntireTestsuite K8sUpdates
23:15:47 STEP: Cleaning up Cilium components
23:15:58 STEP: Waiting for Cilium to become ready
23:16:13 STEP: Running AfterEach for block EntireTestsuite
[[ATTACHMENT|3b73460c_K8sUpdates_Tests_upgrade_and_downgrade_from_a_Cilium_stable_image_to_master.zip]]
23:16:15 STEP: Running AfterAll block for EntireTestsuite K8sUpdates
23:16:15 STEP: Cleaning up Cilium components
ZIP Links:
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-4.9//50/artifact/3b73460c_K8sUpdates_Tests_upgrade_and_downgrade_from_a_Cilium_stable_image_to_master.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-4.9//50/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-4.9//50/artifact/test_results_Cilium-PR-K8s-1.19-kernel-4.9_50_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-4.9/50/
If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.