Labels:
- ci/flake: This is a known failure that occurs in the tree. Please investigate me!
- stale: The stale bot thinks this issue is old. Add the "pinned" label to prevent this from becoming stale.
Stacktrace:
/home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:527

Failure Output:
DNS entry is not ready after timeout
Expected
<*json.SyntaxError | 0xc000013c08>: {
msg: "unexpected end of JSON input",
Offset: 0,
}
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.9/src/github.com/cilium/cilium/test/k8s/istio.go:356
Standard Output
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 1
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 1 errors/warnings:
Network status error received, restarting client connections
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
⚠️ Number of "level=warning" in logs: 9
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 5 errors/warnings:
Key allocation attempt failed
Policy map sync fixed errors, consider running with debug verbose = policy to get detailed dumps
Unable to get node resource
Waiting for k8s node information
Unable to update CiliumNode resource, will retry
Cilium pods: [cilium-4hn48 cilium-s2qkz]
Netpols loaded:
CiliumNetworkPolicies loaded: default::cnp-specs
Endpoint Policy Enforcement:
Pod Ingress Egress
coredns-7c74c644b-7h8nx false false
grafana-d69c97b9b-f9rpv false false
productpage-v1-66b44dc6-2ngmw false false
ratings-v1-6c5dd8d8f9-vpqkm false false
istio-ingressgateway-688549f8d6-jr7vq false false
istiod-5f9fcf7986-6sftg false false
prometheus-655fb888d7-gg6g5 false false
details-v1-7979fc5975-w65lp false false
reviews-v1-858cfcc657-9ltwd false false
reviews-v2-57fcb764c-jrr4z false false
Cilium agent 'cilium-4hn48': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 51 Failed 0
Cilium agent 'cilium-s2qkz': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 27 Failed 0
Standard Error
08:04:53 STEP: Running BeforeAll block for EntireTestsuite K8sIstioTest
08:04:53 STEP: Ensuring the namespace kube-system exists
08:04:53 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
08:04:53 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
08:04:53 STEP: Downloading cilium-istioctl
08:04:54 STEP: Installing Cilium
08:04:55 STEP: Waiting for Cilium to become ready
08:06:28 STEP: Restarting unmanaged pods coredns-7c74c644b-9spkk in namespace kube-system
08:06:35 STEP: Validating if Kubernetes DNS is deployed
08:06:35 STEP: Checking if deployment is ready
08:06:35 STEP: Kubernetes DNS is not ready: only 0 of 1 replicas are available
08:06:35 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
08:06:35 STEP: Waiting for Kubernetes DNS to become operational
08:06:35 STEP: Checking if deployment is ready
08:06:35 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
08:06:36 STEP: Checking if deployment is ready
08:06:36 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
08:06:37 STEP: Checking if deployment is ready
08:06:37 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
08:06:38 STEP: Checking if deployment is ready
08:06:38 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
08:06:39 STEP: Checking if deployment is ready
08:06:39 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
08:06:40 STEP: Checking if deployment is ready
08:06:40 STEP: Checking if kube-dns service is plumbed correctly
08:06:40 STEP: Checking if pods have identity
08:06:40 STEP: Checking if DNS can resolve
08:06:44 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
08:06:44 STEP: Checking if deployment is ready
08:06:44 STEP: Checking if kube-dns service is plumbed correctly
08:06:44 STEP: Checking if pods have identity
08:06:44 STEP: Checking if DNS can resolve
08:06:47 STEP: Validating Cilium Installation
08:06:47 STEP: Performing Cilium controllers preflight check
08:06:47 STEP: Performing Cilium status preflight check
08:06:47 STEP: Performing Cilium health check
08:06:47 STEP: Checking whether host EP regenerated
08:06:55 STEP: Performing Cilium service preflight check
08:06:55 STEP: Performing K8s service preflight check
08:07:01 STEP: Waiting for cilium-operator to be ready
08:07:01 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
08:07:01 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
08:07:01 STEP: Labeling default namespace for sidecar injection
08:07:01 STEP: Setting label istio-injection=enabled in namespace default
08:07:01 STEP: Deploying Istio
08:07:21 STEP: Waiting for Istio pods to be ready
08:07:21 STEP: WaitforNPodsRunning(namespace="istio-system", filter="-l istio")
08:07:21 STEP: WaitforNPods(namespace="istio-system", filter="-l istio") => <nil>
08:07:21 STEP: WaitforPods(namespace="istio-system", filter="-l istio")
08:07:21 STEP: WaitforPods(namespace="istio-system", filter="-l istio") => <nil>
08:07:21 STEP: Waiting for Istio service "istio-ingressgateway" to be ready
08:07:22 STEP: Waiting for Istio service "istiod" to be ready
08:07:22 STEP: Waiting for DNS to resolve Istio service "istio-ingressgateway"
08:07:22 STEP: Waiting for DNS to resolve Istio service "istiod"
08:07:22 STEP: Creating policy in file "/home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/cnp-specs.yaml"
08:07:29 STEP: Creating resources in file "/home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/bookinfo-v2.yaml"
08:07:29 STEP: Creating resources in file "/home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/bookinfo-v1.yaml"
08:07:29 STEP: Waiting for Bookinfo pods to be ready
08:07:29 STEP: WaitforPods(namespace="default", filter="-l zgroup=bookinfo")
08:08:12 STEP: WaitforPods(namespace="default", filter="-l zgroup=bookinfo") => <nil>
08:08:12 STEP: Waiting for Bookinfo endpoints to be ready
08:08:14 STEP: Waiting for Bookinfo service "details" to be ready
08:08:14 STEP: Waiting for Bookinfo service "ratings" to be ready
08:08:14 STEP: Waiting for Bookinfo service "reviews" to be ready
08:08:14 STEP: Waiting for Bookinfo service "productpage" to be ready
08:08:14 STEP: Waiting for DNS to resolve Bookinfo service "productpage"
08:08:15 STEP: Waiting for DNS to resolve Bookinfo service "reviews"
FAIL: DNS entry is not ready after timeout
Expected
<*json.SyntaxError | 0xc000013c08>: {
msg: "unexpected end of JSON input",
Offset: 0,
}
to be nil
==== Test Finished at 2023-06-06T08:08:22Z ====
08:08:22 STEP: Running JustAfterEach block for EntireTestsuite K8sIstioTest
===================== TEST FAILED =====================
08:08:26 STEP: Running AfterFailed block for EntireTestsuite K8sIstioTest
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0
Stdout:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
cilium-monitoring grafana-d69c97b9b-f9rpv 1/1 Running 0 3m39s 10.0.0.25 k8s2 <none> <none>
cilium-monitoring prometheus-655fb888d7-gg6g5 1/1 Running 0 3m39s 10.0.0.121 k8s2 <none> <none>
default details-v1-7979fc5975-w65lp 2/2 Running 0 63s 10.0.1.243 k8s1 <none> <none>
default productpage-v1-66b44dc6-2ngmw 2/2 Running 0 62s 10.0.0.85 k8s2 <none> <none>
default ratings-v1-6c5dd8d8f9-vpqkm 2/2 Running 0 63s 10.0.1.205 k8s1 <none> <none>
default reviews-v1-858cfcc657-9ltwd 2/2 Running 0 63s 10.0.0.252 k8s2 <none> <none>
default reviews-v2-57fcb764c-jrr4z 2/2 Running 0 63s 10.0.0.235 k8s2 <none> <none>
istio-system istio-ingressgateway-688549f8d6-jr7vq 1/1 Running 0 79s 10.0.0.229 k8s2 <none> <none>
istio-system istiod-5f9fcf7986-6sftg 1/1 Running 0 89s 10.0.0.155 k8s2 <none> <none>
kube-system cilium-4hn48 1/1 Running 0 3m37s 192.168.56.12 k8s2 <none> <none>
kube-system cilium-operator-c4d4db79b-gwxxv 1/1 Running 1 3m37s 192.168.56.11 k8s1 <none> <none>
kube-system cilium-operator-c4d4db79b-mgjtt 1/1 Running 0 3m37s 192.168.56.12 k8s2 <none> <none>
kube-system cilium-s2qkz 1/1 Running 0 3m37s 192.168.56.11 k8s1 <none> <none>
kube-system coredns-7c74c644b-7h8nx 1/1 Running 0 117s 10.0.0.227 k8s2 <none> <none>
kube-system etcd-k8s1 1/1 Running 0 7m57s 192.168.56.11 k8s1 <none> <none>
kube-system kube-apiserver-k8s1 1/1 Running 0 7m57s 192.168.56.11 k8s1 <none> <none>
kube-system kube-controller-manager-k8s1 0/1 CrashLoopBackOff 2 7m57s 192.168.56.11 k8s1 <none> <none>
kube-system kube-proxy-2hlll 1/1 Running 0 4m26s 192.168.56.12 k8s2 <none> <none>
kube-system kube-proxy-n5hw4 1/1 Running 0 7m37s 192.168.56.11 k8s1 <none> <none>
kube-system kube-scheduler-k8s1 1/1 Running 2 7m57s 192.168.56.11 k8s1 <none> <none>
kube-system log-gatherer-8fwsq 1/1 Running 0 3m44s 192.168.56.12 k8s2 <none> <none>
kube-system log-gatherer-b7d2f 1/1 Running 0 3m44s 192.168.56.11 k8s1 <none> <none>
kube-system registry-adder-cgwjn 1/1 Running 0 4m24s 192.168.56.12 k8s2 <none> <none>
kube-system registry-adder-n9jpf 1/1 Running 0 4m24s 192.168.56.11 k8s1 <none> <none>
Stderr:
Fetching command output from pods [cilium-4hn48 cilium-s2qkz]
cmd: kubectl exec -n kube-system cilium-4hn48 -c cilium-agent -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT   POLICY (ingress) ENFORCEMENT   POLICY (egress) ENFORCEMENT   IDENTITY   LABELS (source:key[=value])   IPv6   IPv4   STATUS
89 Disabled Disabled 6461 k8s:app=reviews fd02::5d 10.0.0.235 ready
k8s:io.cilium.k8s.namespace.labels.istio-injection=enabled
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.istiosidecarproxy=true
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:istio.io/rev=default
k8s:security.istio.io/tlsMode=istio
k8s:service.istio.io/canonical-name=reviews
k8s:service.istio.io/canonical-revision=v2
k8s:version=v2
k8s:zgroup=bookinfo
361 Disabled Disabled 56021 k8s:app=reviews fd02::96 10.0.0.252 ready
k8s:io.cilium.k8s.namespace.labels.istio-injection=enabled
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.istiosidecarproxy=true
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:istio.io/rev=default
k8s:security.istio.io/tlsMode=istio
k8s:service.istio.io/canonical-name=reviews
k8s:service.istio.io/canonical-revision=v1
k8s:track=stable
k8s:version=v1
k8s:zgroup=bookinfo
404 Disabled Disabled 16432 k8s:io.cilium.k8s.policy.cluster=default fd02::98 10.0.0.227 ready
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
424 Disabled Disabled 30670 k8s:app=grafana fd02::8d 10.0.0.25 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=cilium-monitoring
897 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s2 ready
reserved:host
1362 Disabled Disabled 38298 k8s:app=istiod fd02::33 10.0.0.155 ready
k8s:install.operator.istio.io/owning-resource=unknown
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=istiod-service-account
k8s:io.kubernetes.pod.namespace=istio-system
k8s:istio.io/rev=default
k8s:istio=pilot
k8s:operator.istio.io/component=Pilot
k8s:sidecar.istio.io/inject=false
1373 Disabled Disabled 4 reserved:health fd02::27 10.0.0.178 ready
1923 Disabled Disabled 27707 k8s:app=prometheus fd02::9a 10.0.0.121 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s
k8s:io.kubernetes.pod.namespace=cilium-monitoring
2947 Disabled Disabled 30809 k8s:app=productpage fd02::70 10.0.0.85 ready
k8s:io.cilium.k8s.namespace.labels.istio-injection=enabled
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.istiosidecarproxy=true
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:istio.io/rev=default
k8s:security.istio.io/tlsMode=istio
k8s:service.istio.io/canonical-name=productpage
k8s:service.istio.io/canonical-revision=v1
k8s:track=stable
k8s:version=v1
k8s:zgroup=bookinfo
3559 Disabled Disabled 23236 k8s:app=istio-ingressgateway fd02::92 10.0.0.229 ready
k8s:chart=gateways
k8s:heritage=Tiller
k8s:install.operator.istio.io/owning-resource=unknown
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=istio-ingressgateway-service-account
k8s:io.kubernetes.pod.namespace=istio-system
k8s:istio.io/rev=default
k8s:istio=ingressgateway
k8s:operator.istio.io/component=IngressGateways
k8s:release=istio
k8s:service.istio.io/canonical-name=istio-ingressgateway
k8s:service.istio.io/canonical-revision=latest
k8s:sidecar.istio.io/inject=false
Stderr:
cmd: kubectl exec -n kube-system cilium-s2qkz -c cilium-agent -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT   POLICY (ingress) ENFORCEMENT   POLICY (egress) ENFORCEMENT   IDENTITY   LABELS (source:key[=value])   IPv6   IPv4   STATUS
229 Enabled Disabled 27822 k8s:app=details fd02::158 10.0.1.243 ready
k8s:io.cilium.k8s.namespace.labels.istio-injection=enabled
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.istiosidecarproxy=true
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:istio.io/rev=default
k8s:security.istio.io/tlsMode=istio
k8s:service.istio.io/canonical-name=details
k8s:service.istio.io/canonical-revision=v1
k8s:track=stable
k8s:version=v1
k8s:zgroup=bookinfo
1524 Enabled Disabled 34542 k8s:app=ratings fd02::13d 10.0.1.205 ready
k8s:io.cilium.k8s.namespace.labels.istio-injection=enabled
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.istiosidecarproxy=true
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:istio.io/rev=default
k8s:security.istio.io/tlsMode=istio
k8s:service.istio.io/canonical-name=ratings
k8s:service.istio.io/canonical-revision=v1
k8s:version=v1
k8s:zgroup=bookinfo
1922 Disabled Disabled 4 reserved:health fd02::138 10.0.1.227 ready
2546 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s1 ready
k8s:node-role.kubernetes.io/control-plane
k8s:node-role.kubernetes.io/master
reserved:host
Stderr:
===================== Exiting AfterFailed =====================
08:09:06 STEP: Running AfterEach for block EntireTestsuite K8sIstioTest Istio Bookinfo Demo
08:09:06 STEP: Deleting resource in file "/home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/bookinfo-v2.yaml"
08:09:06 STEP: Deleting resource in file "/home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/bookinfo-v1.yaml"
08:09:06 STEP: Deleting policy in file "/home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/cnp-specs.yaml"
08:09:06 STEP: Running AfterEach for block EntireTestsuite
[[ATTACHMENT|9d6e5f0f_K8sIstioTest_Istio_Bookinfo_Demo_Tests_bookinfo_inter-service_connectivity.zip]]
08:09:08 STEP: Running AfterAll block for EntireTestsuite K8sIstioTest
08:09:08 STEP: Deleting default namespace sidecar injection label
08:09:08 STEP: Setting label istio-injection- in namespace default
08:09:08 STEP: Deleting the Istio resources
08:09:09 STEP: Waiting all terminating PODs to disappear
08:09:09 STEP: Deleting the istio-system namespace
08:09:09 STEP: Deleting namespace istio-system
08:09:29 STEP: Removing Cilium installation using generated helm manifest
ZIP Links:
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.20-kernel-4.9//24/artifact/9d6e5f0f_K8sIstioTest_Istio_Bookinfo_Demo_Tests_bookinfo_inter-service_connectivity.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.20-kernel-4.9//24/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.20-kernel-4.9//24/artifact/test_results_Cilium-PR-K8s-1.20-kernel-4.9_24_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.20-kernel-4.9/24/
If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.