Stacktrace
/home/jenkins/workspace/cilium-v1.10-gke/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:518
Pods are not updating
Expected
<*errors.errorString | 0xc00121b9d0>: {
s: "Cilium Pods are not updating correctly to afed209f6c31e850389a5ac7f1246855614ad408: 5m0s timeout expired",
}
to be nil
/home/jenkins/workspace/cilium-v1.10-gke/src/github.com/cilium/cilium/test/k8sT/Updates.go:129
Standard Output
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 2
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 1 errors/warnings:
Unable to serve pprof API
Number of "context deadline exceeded" in logs: 2
Number of "level=error" in logs: 0
⚠️ Number of "level=warning" in logs: 18
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 3 errors/warnings:
Unable to update ipcache map entry on pod add
Session affinity for host reachable services needs kernel 5.7.0 or newer to work properly when accessed from inside cluster: the same service endpoint will be selected from all network namespaces on the host.
Failed to send policy update as monitor notification
Cilium pods: [cilium-dxp2s cilium-lgvn6]
Netpols loaded:
CiliumNetworkPolicies loaded: default::l7-policy
Endpoint Policy Enforcement:
Pod Ingress Egress
migrate-svc-client-2jkmp
migrate-svc-client-9c9wn
migrate-svc-client-wkrvh
migrate-svc-server-jrgjc
migrate-svc-server-zvpw2
kube-dns-c9488f9fb-fff6v
kube-dns-c9488f9fb-mrh8w
app2-5cc5d58844-jsjnf
app3-6c7856c5b5-w5h9d
app1-7b6ddb776f-4z69b
app1-7b6ddb776f-xvtjq
migrate-svc-client-fxscs
migrate-svc-client-j4b8x
migrate-svc-server-lsfqd
Cilium agent 'cilium-dxp2s': Status: Ok Health: Ok Nodes "" ContinerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 52 Failed 0
Cilium agent 'cilium-lgvn6': Status: Ok Health: Ok Nodes "" ContinerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 62 Failed 0
Standard Error
13:47:38 STEP: Running BeforeAll block for EntireTestsuite K8sUpdates
13:47:38 STEP: Ensuring the namespace kube-system exists
13:47:39 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
13:47:39 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
13:48:17 STEP: Deleting Cilium and CoreDNS...
13:48:24 STEP: Waiting for pods to be terminated..
13:48:25 STEP: Cleaning Cilium state (afed209f6c31e850389a5ac7f1246855614ad408)
13:48:26 STEP: Cleaning up Cilium components
13:48:36 STEP: Waiting for Cilium to become ready
13:48:52 STEP: Cleaning Cilium state (v1.9)
13:48:52 STEP: Cleaning up Cilium components
13:49:13 STEP: Waiting for Cilium to become ready
13:49:42 STEP: Deploying Cilium 1.9-dev
13:49:47 STEP: Waiting for Cilium to become ready
13:50:34 STEP: Validating Cilium Installation
13:50:34 STEP: Performing Cilium controllers preflight check
13:50:34 STEP: Performing Cilium health check
13:50:34 STEP: Performing Cilium status preflight check
13:50:38 STEP: Performing Cilium service preflight check
13:50:38 STEP: Performing K8s service preflight check
13:50:38 STEP: Waiting for cilium-operator to be ready
13:50:39 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
13:50:39 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
13:50:39 STEP: Cilium "1.9-dev" is installed and running
13:50:39 STEP: Restarting DNS Pods
13:51:16 STEP: Waiting for kube-dns to be ready
13:51:16 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=kube-dns")
13:51:16 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=kube-dns") => <nil>
13:51:16 STEP: Running kube-dns preflight check
13:51:20 STEP: Performing K8s service preflight check
13:51:20 STEP: Creating some endpoints and L7 policy
13:51:21 STEP: WaitforPods(namespace="default", filter="-l zgroup=testapp")
13:51:41 STEP: WaitforPods(namespace="default", filter="-l zgroup=testapp") => <nil>
13:51:53 STEP: Creating service and clients for migration
13:51:53 STEP: WaitforPods(namespace="default", filter="-l app=migrate-svc-server")
13:52:03 STEP: WaitforPods(namespace="default", filter="-l app=migrate-svc-server") => <nil>
13:52:04 STEP: WaitforPods(namespace="default", filter="-l app=migrate-svc-client")
13:52:14 STEP: WaitforPods(namespace="default", filter="-l app=migrate-svc-client") => <nil>
13:52:14 STEP: Validate that endpoints are ready before making any connection
13:52:16 STEP: Waiting for kube-dns to be ready
13:52:16 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=kube-dns")
13:52:16 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=kube-dns") => <nil>
13:52:16 STEP: Running kube-dns preflight check
13:52:20 STEP: Performing K8s service preflight check
13:52:23 STEP: Making L7 requests between endpoints
13:52:25 STEP: No interrupts in migrated svc flows
13:52:25 STEP: Install Cilium pre-flight check DaemonSet
13:52:30 STEP: Waiting for all cilium pre-flight pods to be ready
13:52:30 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-pre-flight-check")
13:52:40 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-pre-flight-check") => <nil>
13:52:40 STEP: Removing Cilium pre-flight check DaemonSet
13:52:42 STEP: Waiting for Cilium to become ready
13:52:43 STEP: Upgrading Cilium to 1.9.90
13:52:49 STEP: Validating pods have the right image version upgraded
FAIL: Pods are not updating
Expected
<*errors.errorString | 0xc00121b9d0>: {
s: "Cilium Pods are not updating correctly to afed209f6c31e850389a5ac7f1246855614ad408: 5m0s timeout expired",
}
to be nil
=== Test Finished at 2021-05-05T13:57:49Z====
13:57:49 STEP: Running JustAfterEach block for EntireTestsuite K8sUpdates
===================== TEST FAILED =====================
13:57:50 STEP: Running AfterFailed block for EntireTestsuite K8sUpdates
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0
Stdout:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
202105051238k8sclicliidentityclitestingtestciliumbpfmetricslist app1-5798c5fb6b-frb9h 2/2 Running 0 79m 10.96.2.157 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-hdth <none> <none>
202105051238k8sclicliidentityclitestingtestciliumbpfmetricslist app1-5798c5fb6b-zb965 2/2 Running 0 79m 10.96.2.206 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-hdth <none> <none>
202105051238k8sclicliidentityclitestingtestciliumbpfmetricslist app2-5cc5d58844-l7mqk 1/1 Running 0 79m 10.96.2.71 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-hdth <none> <none>
202105051238k8sclicliidentityclitestingtestciliumbpfmetricslist app3-6c7856c5b5-n2g5z 1/1 Running 0 79m 10.96.2.35 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-hdth <none> <none>
cilium-monitoring grafana-7fd557d749-ptv2j 1/1 Running 1 83m 10.96.1.73 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-l32f <none> <none>
cilium-monitoring prometheus-d87f8f984-7fz5j 1/1 Running 1 83m 10.96.2.203 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-hdth <none> <none>
default app1-7b6ddb776f-4z69b 2/2 Running 0 6m33s 10.96.2.21 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-hdth <none> <none>
default app1-7b6ddb776f-xvtjq 2/2 Running 0 6m33s 10.96.2.109 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-hdth <none> <none>
default app2-5cc5d58844-jsjnf 1/1 Running 0 6m33s 10.96.2.152 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-hdth <none> <none>
default app3-6c7856c5b5-w5h9d 1/1 Running 0 6m33s 10.96.2.159 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-hdth <none> <none>
default migrate-svc-client-2jkmp 1/1 Running 0 5m50s 10.96.1.140 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-l32f <none> <none>
default migrate-svc-client-9c9wn 1/1 Running 0 5m50s 10.96.1.156 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-l32f <none> <none>
default migrate-svc-client-fxscs 1/1 Running 0 5m50s 10.96.1.214 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-l32f <none> <none>
default migrate-svc-client-j4b8x 1/1 Running 0 5m50s 10.96.1.110 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-l32f <none> <none>
default migrate-svc-client-wkrvh 1/1 Running 0 5m50s 10.96.2.94 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-hdth <none> <none>
default migrate-svc-server-jrgjc 1/1 Running 0 6m1s 10.96.1.15 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-l32f <none> <none>
default migrate-svc-server-lsfqd 1/1 Running 0 6m1s 10.96.1.41 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-l32f <none> <none>
default migrate-svc-server-zvpw2 1/1 Running 0 6m1s 10.96.1.108 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-l32f <none> <none>
kube-system cilium-dxp2s 1/1 Running 0 4m58s 10.128.0.55 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-hdth <none> <none>
kube-system cilium-lgvn6 1/1 Running 0 5m3s 10.128.0.54 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-l32f <none> <none>
kube-system cilium-node-init-7ntzc 1/1 Running 0 5m3s 10.128.0.54 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-l32f <none> <none>
kube-system cilium-node-init-nnwkm 1/1 Running 0 4m56s 10.128.0.55 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-hdth <none> <none>
kube-system cilium-operator-798bf55f9-k52f5 1/1 Running 0 5m5s 10.128.0.55 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-hdth <none> <none>
kube-system cilium-operator-798bf55f9-sflc4 1/1 Running 0 5m5s 10.128.0.54 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-l32f <none> <none>
kube-system event-exporter-gke-666b7ffbf7-snqpm 2/2 Running 0 82m 10.96.2.181 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-hdth <none> <none>
kube-system fluentbit-gke-2jq7d 2/2 Running 0 85m 10.128.0.55 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-hdth <none> <none>
kube-system fluentbit-gke-sbb87 2/2 Running 0 85m 10.128.0.54 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-l32f <none> <none>
kube-system gke-metrics-agent-ccnk9 1/1 Running 0 85m 10.128.0.54 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-l32f <none> <none>
kube-system gke-metrics-agent-q2f5t 1/1 Running 0 85m 10.128.0.55 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-hdth <none> <none>
kube-system kube-dns-autoscaler-5c78d65cd9-d4lmb 1/1 Running 0 82m 10.96.2.59 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-hdth <none> <none>
kube-system kube-dns-c9488f9fb-fff6v 4/4 Running 0 7m15s 10.96.2.46 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-hdth <none> <none>
kube-system kube-dns-c9488f9fb-mrh8w 4/4 Running 0 7m15s 10.96.1.42 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-l32f <none> <none>
kube-system kube-proxy-gke-cilium-ci-3-cilium-ci-3-7c6ea84e-hdth 1/1 Running 0 85m 10.128.0.55 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-hdth <none> <none>
kube-system kube-proxy-gke-cilium-ci-3-cilium-ci-3-7c6ea84e-l32f 1/1 Running 0 85m 10.128.0.54 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-l32f <none> <none>
kube-system l7-default-backend-5b76b455d-28k72 1/1 Running 0 82m 10.96.2.177 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-hdth <none> <none>
kube-system log-gatherer-7ww44 1/1 Running 0 84m 10.128.0.55 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-hdth <none> <none>
kube-system log-gatherer-r9n47 1/1 Running 0 84m 10.128.0.54 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-l32f <none> <none>
kube-system metrics-server-v0.3.6-547dc87f5f-56ts6 2/2 Running 0 82m 10.96.2.79 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-hdth <none> <none>
kube-system prometheus-to-sd-47hsl 1/1 Running 0 85m 10.128.0.54 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-l32f <none> <none>
kube-system prometheus-to-sd-nmqwc 1/1 Running 0 85m 10.128.0.55 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-hdth <none> <none>
kube-system stackdriver-metadata-agent-cluster-level-7d99d4c768-8jpkz 2/2 Running 0 82m 10.96.2.55 gke-cilium-ci-3-cilium-ci-3-7c6ea84e-hdth <none> <none>
Stderr:
Fetching command output from pods [cilium-dxp2s cilium-lgvn6]
cmd: kubectl exec -n kube-system cilium-dxp2s -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
108 Disabled Disabled 13986 k8s:app=migrate-svc-client 10.96.2.94 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=migrate-svc
113 Disabled Disabled 28916 k8s:id=app3 10.96.2.159 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testapp
1106 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s1 ready
k8s:cloud.google.com/gke-nodepool=cilium-ci-3
k8s:cloud.google.com/gke-os-distribution=cos
k8s:cloud.google.com/machine-family=n1
k8s:node.kubernetes.io/instance-type=n1-standard-4
k8s:topology.kubernetes.io/region=us-west1
k8s:topology.kubernetes.io/zone=us-west1-a
reserved:host
1188 Disabled Disabled 4 reserved:health 10.96.2.242 ready
2393 Enabled Disabled 34501 k8s:id=app1 10.96.2.109 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=app1-account
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testapp
2644 Disabled Disabled 962 k8s:io.cilium.k8s.policy.cluster=default 10.96.2.46 ready
k8s:io.cilium.k8s.policy.serviceaccount=kube-dns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
3664 Enabled Disabled 34501 k8s:id=app1 10.96.2.21 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=app1-account
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testapp
3851 Disabled Disabled 39214 k8s:appSecond=true 10.96.2.152 ready
k8s:id=app2
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=app2-account
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testapp
Stderr:
cmd: kubectl exec -n kube-system cilium-lgvn6 -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
264 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s2 ready
k8s:cloud.google.com/gke-nodepool=cilium-ci-3
k8s:cloud.google.com/gke-os-distribution=cos
k8s:cloud.google.com/machine-family=n1
k8s:node.kubernetes.io/instance-type=n1-standard-4
k8s:topology.kubernetes.io/region=us-west1
k8s:topology.kubernetes.io/zone=us-west1-a
reserved:host
420 Disabled Disabled 13986 k8s:app=migrate-svc-client 10.96.1.156 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=migrate-svc
466 Disabled Disabled 13986 k8s:app=migrate-svc-client 10.96.1.214 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=migrate-svc
588 Disabled Disabled 4 reserved:health 10.96.1.103 ready
649 Disabled Disabled 962 k8s:io.cilium.k8s.policy.cluster=default 10.96.1.42 ready
k8s:io.cilium.k8s.policy.serviceaccount=kube-dns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
1658 Disabled Disabled 11619 k8s:app=migrate-svc-server 10.96.1.41 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=migrate-svc
3095 Disabled Disabled 13986 k8s:app=migrate-svc-client 10.96.1.110 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=migrate-svc
3300 Disabled Disabled 13986 k8s:app=migrate-svc-client 10.96.1.140 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=migrate-svc
3827 Disabled Disabled 11619 k8s:app=migrate-svc-server 10.96.1.15 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=migrate-svc
3938 Disabled Disabled 11619 k8s:app=migrate-svc-server 10.96.1.108 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=migrate-svc
Stderr:
===================== Exiting AfterFailed =====================
13:58:31 STEP: Running AfterEach for block EntireTestsuite K8sUpdates
13:59:17 STEP: Cleaning up Cilium components
13:59:35 STEP: Waiting for Cilium to become ready
13:59:55 STEP: Running AfterEach for block EntireTestsuite
[[ATTACHMENT|26ac0e53_K8sUpdates_Tests_upgrade_and_downgrade_from_a_Cilium_stable_image_to_master.zip]]
13:59:57 STEP: Running AfterAll block for EntireTestsuite K8sUpdates
13:59:57 STEP: Cleaning up Cilium components
nbusseneau added the labels area/CI (Continuous Integration testing issue or flake) and ci/flake (This is a known failure that occurs in the tree. Please investigate me!) on May 6, 2021.
That one is weird... The test asks Helm to upgrade to Cilium version afed209, taken from CiliumTag (a command-line parameter whose value matches the expected value in the logs), and therefore expects the agent images to match that version after the upgrade. Instead, it finds the agents running Cilium version efc15cd.
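For reference, one quick way to confirm which image the agents actually ended up on after the Helm upgrade (just a sketch; the k8s-app=cilium label selector and the jsonpath expression are my assumptions about the CI cluster, not taken from the output above):
$ kubectl -n kube-system get pods -l k8s-app=cilium \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'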
It's not even that the upgrade didn't happen; it happened but to the wrong images:
$ docker run quay.io/cilium/cilium-ci@sha256:bdec5db5b9651c208a326f8d3b1d6a1caf5d943989ea2fdb68b24802dd17b134 cilium version
Client: 1.10.0-rc1 efc15cd 2021-04-27T17:45:59-07:00 go version go1.16.3 linux/amd64
Daemon: Not responding
$ docker run quay.io/cilium/cilium-ci:afed209f6c31e850389a5ac7f1246855614ad408 cilium version
Client: 1.10.0-rc1 afed209 2021-05-04T18:23:58+02:00 go version go1.16.3 linux/amd64
Daemon: Not responding
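To rule out a mis-tagged image, it should also be possible to compare what the tag and the digest resolve to on the registry side, along these lines (assuming anonymous manifest reads are allowed on quay.io; docker manifest inspect may require the experimental CLI on older Docker versions):
$ docker manifest inspect quay.io/cilium/cilium-ci:afed209f6c31e850389a5ac7f1246855614ad408
$ docker manifest inspect quay.io/cilium/cilium-ci@sha256:bdec5db5b9651c208a326f8d3b1d6a1caf5d943989ea2fdb68b24802dd17b134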
I don't understand where that efc15cd tag is coming from 🤔 It does correspond exactly to tag v1.10.0-rc1.
Could this somehow be related to this error in the v1.10 branch?
Do we need #15947 backported for this to work (basically, skipping the image digests in the Docker image listings in CI, because the digests in the tree correspond to the most recent release)?
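If that turns out to be the problem, I would expect the fix to amount to pinning the upgrade to the tag rather than to a digest baked into the tree, i.e. something like the following (hypothetical Helm invocation only, to illustrate the idea; the chart reference and value keys such as image.useDigest may differ in the v1.10 chart and in the CI harness):
$ helm upgrade cilium cilium/cilium -n kube-system \
    --set image.repository=quay.io/cilium/cilium-ci \
    --set image.tag=afed209f6c31e850389a5ac7f1246855614ad408 \
    --set image.useDigest=false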
nbusseneau changed the title from "CI: v1.10/GKE K8sUpdates Tests upgrade and downgrade from a Cilium stable image to master" to "CI: v1.10 K8sUpdates Tests upgrade and downgrade from a Cilium stable image to master" on May 6, 2021.