
CI: Suite-k8s-1.20.K8sServicesTest Checks service across nodes Tests NodePort (kube-proxy) with the host firewall and externalTrafficPolicy=Local: Exit status 42 #15103

Closed
joestringer opened this issue Feb 24, 2021 · 2 comments
Labels: area/CI, ci/flake, stale

@joestringer (Member) commented:

K8sServicesTest Checks service across nodes Tests NodePort (kube-proxy) with the host firewall and externalTrafficPolicy=Local

Kubernetes 1.20
Cilium v1.10 dev cycle
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.20-kernel-4.9/717/testReport/junit/Suite-k8s-1/20/K8sServicesTest_Checks_service_across_nodes_Tests_NodePort__kube_proxy__with_the_host_firewall_and_externalTrafficPolicy_Local/

Artifacts are too big to attach.

Failed on #14905, which only changes some logging in the operator plus AWS-specific codepaths that are not exercised by this job.

Possibly related to #13839, #13011, #12690.
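
For context, the failing case exercises a NodePort Service with externalTrafficPolicy=Local while the host firewall is enabled; with Local, only a node that runs a backend pod is expected to answer on its NodePort, and the client source IP is preserved. A minimal sketch of such a Service, assuming the testDS backends seen later in this report (the name and ports are illustrative, not the ones the suite actually creates):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: testds-nodeport-local   # hypothetical name, for illustration only
spec:
  type: NodePort
  externalTrafficPolicy: Local  # only nodes running a backend pod serve the NodePort
  selector:
    zgroup: testDS              # matches the testds-* pods in this run
  ports:
  - name: http
    port: 80
    targetPort: 80
EOF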

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:514
Request from k8s1 to service http://192.168.36.12:31372 failed
Expected command: kubectl exec -n kube-system log-gatherer-lprgb -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://192.168.36.12:31372 -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 failed: :5674/1=28:5674/2=28:5674/3=28:5674/4=28:5674/5=28:5674/6=28:5674/7=28:5674/8=28:5674/9=28:5674/10=28
	 
Stderr:
 	 command terminated with exit code 42
	 

/home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.9/src/github.com/cilium/cilium/test/k8sT/Services.go:1583
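
Each entry in the "failed:" string has the form :$id/$i=$exitcode (see the test command above), so all 10 attempts from id 5674 ended with curl exit code 28, i.e. the operation timed out before any response arrived. A single probe can be reproduced from the k8s1 host netns with the same parameters; the node IP, NodePort, and log-gatherer pod name below are specific to this run:

NODE_IP=192.168.36.12   # k8s2 node IP in this run
NODE_PORT=31372         # NodePort allocated in this run
kubectl exec -n kube-system log-gatherer-lprgb -- \
  curl --path-as-is -s -D /dev/stderr --fail \
       --connect-timeout 5 --max-time 20 \
       "http://${NODE_IP}:${NODE_PORT}" -H "User-Agent: cilium-test-manual/1"
echo "curl exit code: $?"   # 28 means curl hit a timeout before completing the request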

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 4
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Cilium pods: [cilium-562xz cilium-r6hpl]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
prometheus-655fb888d7-l8b9k             
test-k8s2-79ff876c9d-r87fl              
testclient-hpcsv                        
testclient-l9vnl                        
testds-g257g                            
testds-gqp2x                            
coredns-867bf6789f-4sfwk                
grafana-d69c97b9b-w9l5j                 
Cilium agent 'cilium-562xz': Status: Ok  Health: Ok Nodes "" Kubernetes: Ok KVstore: Ok Controllers: Total 31 Failed 0
Cilium agent 'cilium-r6hpl': Status: Ok  Health: Ok Nodes "" Kubernetes: Ok KVstore: Ok Controllers: Total 42 Failed 0

Standard Error

12:34:13 STEP: Installing Cilium
12:34:14 STEP: Waiting for Cilium to become ready
12:34:49 STEP: Validating if Kubernetes DNS is deployed
12:34:49 STEP: Checking if deployment is ready
12:34:49 STEP: Checking if kube-dns service is plumbed correctly
12:34:49 STEP: Checking if DNS can resolve
12:34:49 STEP: Checking if pods have identity
12:34:55 STEP: Kubernetes DNS is not ready: 5s timeout expired
12:34:55 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
12:35:01 STEP: Checking service kube-system/kube-dns plumbing in cilium pod cilium-r6hpl: unable to find service backend 10.0.0.178:53 in datapath of cilium pod cilium-r6hpl
12:35:11 STEP: Waiting for Kubernetes DNS to become operational
12:35:11 STEP: Checking if deployment is ready
12:35:11 STEP: Checking if kube-dns service is plumbed correctly
12:35:11 STEP: Checking if pods have identity
12:35:11 STEP: Checking if DNS can resolve
12:35:12 STEP: Validating Cilium Installation
12:35:12 STEP: Performing Cilium controllers preflight check
12:35:12 STEP: Performing Cilium health check
12:35:12 STEP: Performing Cilium status preflight check
12:35:13 STEP: Performing Cilium service preflight check
12:35:13 STEP: Performing K8s service preflight check
12:35:13 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-r6hpl': Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory
	 
	 command terminated with exit code 1
	 

12:35:17 STEP: Performing Cilium status preflight check
12:35:17 STEP: Performing Cilium health check
12:35:17 STEP: Performing Cilium controllers preflight check
12:35:18 STEP: Performing Cilium service preflight check
12:35:18 STEP: Performing K8s service preflight check
12:35:22 STEP: Performing Cilium controllers preflight check
12:35:22 STEP: Performing Cilium status preflight check
12:35:22 STEP: Performing Cilium health check
12:35:23 STEP: Performing Cilium service preflight check
12:35:23 STEP: Performing K8s service preflight check
12:35:27 STEP: Performing Cilium controllers preflight check
12:35:27 STEP: Performing Cilium health check
12:35:27 STEP: Performing Cilium status preflight check
12:35:28 STEP: Performing Cilium service preflight check
12:35:28 STEP: Performing K8s service preflight check
12:35:32 STEP: Performing Cilium status preflight check
12:35:32 STEP: Performing Cilium health check
12:35:32 STEP: Performing Cilium controllers preflight check
12:35:33 STEP: Performing Cilium service preflight check
12:35:33 STEP: Performing K8s service preflight check
12:35:37 STEP: Performing Cilium health check
12:35:37 STEP: Performing Cilium controllers preflight check
12:35:37 STEP: Performing Cilium status preflight check
12:35:38 STEP: Performing Cilium service preflight check
12:35:38 STEP: Performing K8s service preflight check
12:35:42 STEP: Performing Cilium controllers preflight check
12:35:42 STEP: Performing Cilium health check
12:35:42 STEP: Performing Cilium status preflight check
12:35:43 STEP: Performing Cilium service preflight check
12:35:43 STEP: Performing K8s service preflight check
12:35:44 STEP: Waiting for cilium-operator to be ready
12:35:44 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
12:35:44 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
Skipping externalTrafficPolicy=Local test from external node
12:35:44 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://192.168.36.12:31372"
FAIL: Request from k8s1 to service http://192.168.36.12:31372 failed
Expected command: kubectl exec -n kube-system log-gatherer-lprgb -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://192.168.36.12:31372 -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 failed: :5674/1=28:5674/2=28:5674/3=28:5674/4=28:5674/5=28:5674/6=28:5674/7=28:5674/8=28:5674/9=28:5674/10=28
	 
Stderr:
 	 command terminated with exit code 42
	 

=== Test Finished at 2021-02-18T12:36:35Z====
12:36:35 STEP: Running JustAfterEach block for EntireTestsuite K8sServicesTest
===================== TEST FAILED =====================
12:36:35 STEP: Running AfterFailed block for EntireTestsuite K8sServicesTest
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE           NAME                               READY   STATUS    RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 cilium-monitoring   grafana-d69c97b9b-w9l5j            1/1     Running   0          94m     10.0.0.84       k8s2   <none>           <none>
	 cilium-monitoring   prometheus-655fb888d7-l8b9k        1/1     Running   0          94m     10.0.0.240      k8s2   <none>           <none>
	 default             test-k8s2-79ff876c9d-r87fl         2/2     Running   0          5m45s   10.0.0.136      k8s2   <none>           <none>
	 default             testclient-hpcsv                   1/1     Running   0          5m45s   10.0.0.49       k8s2   <none>           <none>
	 default             testclient-l9vnl                   1/1     Running   0          5m45s   10.0.1.104      k8s1   <none>           <none>
	 default             testds-g257g                       2/2     Running   0          5m45s   10.0.1.23       k8s1   <none>           <none>
	 default             testds-gqp2x                       2/2     Running   0          5m45s   10.0.0.110      k8s2   <none>           <none>
	 kube-system         cilium-562xz                       1/1     Running   0          2m23s   192.168.36.11   k8s1   <none>           <none>
	 kube-system         cilium-operator-54b846fcd8-5rtwf   1/1     Running   0          2m23s   192.168.36.11   k8s1   <none>           <none>
	 kube-system         cilium-operator-54b846fcd8-xg9ws   1/1     Running   0          2m23s   192.168.36.12   k8s2   <none>           <none>
	 kube-system         cilium-r6hpl                       1/1     Running   0          2m23s   192.168.36.12   k8s2   <none>           <none>
	 kube-system         coredns-867bf6789f-4sfwk           1/1     Running   0          102s    10.0.1.246      k8s1   <none>           <none>
	 kube-system         etcd-k8s1                          1/1     Running   0          98m     192.168.36.11   k8s1   <none>           <none>
	 kube-system         kube-apiserver-k8s1                1/1     Running   0          98m     192.168.36.11   k8s1   <none>           <none>
	 kube-system         kube-controller-manager-k8s1       1/1     Running   0          98m     192.168.36.11   k8s1   <none>           <none>
	 kube-system         kube-proxy-2km6x                   1/1     Running   0          97m     192.168.36.11   k8s1   <none>           <none>
	 kube-system         kube-proxy-8chtf                   1/1     Running   0          96m     192.168.36.12   k8s2   <none>           <none>
	 kube-system         kube-scheduler-k8s1                1/1     Running   0          98m     192.168.36.11   k8s1   <none>           <none>
	 kube-system         log-gatherer-ftcn6                 1/1     Running   0          94m     192.168.36.12   k8s2   <none>           <none>
	 kube-system         log-gatherer-lprgb                 1/1     Running   0          94m     192.168.36.11   k8s1   <none>           <none>
	 kube-system         registry-adder-phtx5               1/1     Running   0          96m     192.168.36.12   k8s2   <none>           <none>
	 kube-system         registry-adder-r4fjm               1/1     Running   0          96m     192.168.36.11   k8s1   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-562xz cilium-r6hpl]
cmd: kubectl exec -n kube-system cilium-562xz -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend               Service Type   Backend                   
	 1    10.96.0.1:443          ClusterIP      1 => 192.168.36.11:6443   
	 2    10.96.0.10:9153        ClusterIP      1 => 10.0.1.246:9153      
	 3    10.96.0.10:53          ClusterIP      1 => 10.0.1.246:53        
	 4    10.99.197.46:3000      ClusterIP      1 => 10.0.0.84:3000       
	 5    10.105.70.135:9090     ClusterIP      1 => 10.0.0.240:9090      
	 6    10.96.62.248:80        ClusterIP      1 => 10.0.1.23:80         
	                                            2 => 10.0.0.110:80        
	 7    10.96.62.248:69        ClusterIP      1 => 10.0.1.23:69         
	                                            2 => 10.0.0.110:69        
	 8    10.108.101.23:10080    ClusterIP      1 => 10.0.1.23:80         
	                                            2 => 10.0.0.110:80        
	 9    10.108.101.23:10069    ClusterIP      1 => 10.0.1.23:69         
	                                            2 => 10.0.0.110:69        
	 10   10.106.54.89:10080     ClusterIP      1 => 10.0.1.23:80         
	                                            2 => 10.0.0.110:80        
	 11   10.106.54.89:10069     ClusterIP      1 => 10.0.1.23:69         
	                                            2 => 10.0.0.110:69        
	 12   10.98.0.186:10080      ClusterIP      1 => 10.0.1.23:80         
	                                            2 => 10.0.0.110:80        
	 13   10.98.0.186:10069      ClusterIP      1 => 10.0.1.23:69         
	                                            2 => 10.0.0.110:69        
	 14   10.107.24.13:10080     ClusterIP      1 => 10.0.0.136:80        
	 15   10.107.24.13:10069     ClusterIP      1 => 10.0.0.136:69        
	 16   10.105.86.106:10069    ClusterIP      1 => 10.0.0.136:69        
	 17   10.105.86.106:10080    ClusterIP      1 => 10.0.0.136:80        
	 18   10.99.32.140:80        ClusterIP      1 => 10.0.1.23:80         
	                                            2 => 10.0.0.110:80        
	 19   10.106.233.178:80      ClusterIP      1 => 10.0.0.136:80        
	 20   10.109.140.189:20069   ClusterIP      1 => 10.0.1.23:69         
	                                            2 => 10.0.0.110:69        
	 21   10.109.140.189:20080   ClusterIP      1 => 10.0.1.23:80         
	                                            2 => 10.0.0.110:80        
	 22   [fd03::268d]:80        ClusterIP      1 => [fd02::138]:80       
	                                            2 => [fd02::fe]:80        
	 23   [fd03::268d]:69        ClusterIP      1 => [fd02::138]:69       
	                                            2 => [fd02::fe]:69        
	 24   [fd03::1b4]:10080      ClusterIP      1 => [fd02::138]:80       
	                                            2 => [fd02::fe]:80        
	 25   [fd03::1b4]:10069      ClusterIP      1 => [fd02::138]:69       
	                                            2 => [fd02::fe]:69        
	 26   [fd03::634]:10080      ClusterIP      1 => [fd02::138]:80       
	                                            2 => [fd02::fe]:80        
	 27   [fd03::634]:10069      ClusterIP      1 => [fd02::138]:69       
	                                            2 => [fd02::fe]:69        
	 28   [fd03::55b2]:10080     ClusterIP      1 => [fd02::138]:80       
	                                            2 => [fd02::fe]:80        
	 29   [fd03::55b2]:10069     ClusterIP      1 => [fd02::138]:69       
	                                            2 => [fd02::fe]:69        
	 30   [fd03::e739]:10080     ClusterIP      1 => [fd02::a5]:80        
	 31   [fd03::e739]:10069     ClusterIP      1 => [fd02::a5]:69        
	 32   [fd03::33a8]:10080     ClusterIP      1 => [fd02::a5]:80        
	 33   [fd03::33a8]:10069     ClusterIP      1 => [fd02::a5]:69        
	 34   [fd03::2c54]:20080     ClusterIP      1 => [fd02::138]:80       
	                                            2 => [fd02::fe]:80        
	 35   [fd03::2c54]:20069     ClusterIP      1 => [fd02::138]:69       
	                                            2 => [fd02::fe]:69        
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-562xz -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                       IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                             
	 216        Disabled           Disabled          4          reserved:health                                   fd02::1bc   10.0.1.228   ready   
	 699        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                 ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                          
	                                                            k8s:node-role.kubernetes.io/master                                                 
	                                                            reserved:host                                                                      
	 1006       Disabled           Disabled          55797      k8s:io.cilium.k8s.policy.cluster=default          fd02::1df   10.0.1.246   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                    
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                        
	                                                            k8s:k8s-app=kube-dns                                                               
	 1603       Disabled           Disabled          3510       k8s:io.cilium.k8s.policy.cluster=default          fd02::138   10.0.1.23    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                    
	                                                            k8s:io.kubernetes.pod.namespace=default                                            
	                                                            k8s:zgroup=testDS                                                                  
	 2653       Disabled           Disabled          24732      k8s:io.cilium.k8s.policy.cluster=default          fd02::110   10.0.1.104   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                    
	                                                            k8s:io.kubernetes.pod.namespace=default                                            
	                                                            k8s:zgroup=testDSClient                                                            
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-r6hpl -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend               Service Type   Backend                   
	 1    10.96.0.1:443          ClusterIP      1 => 192.168.36.11:6443   
	 2    10.96.0.10:53          ClusterIP      1 => 10.0.1.246:53        
	 3    10.96.0.10:9153        ClusterIP      1 => 10.0.1.246:9153      
	 4    10.99.197.46:3000      ClusterIP      1 => 10.0.0.84:3000       
	 5    10.105.70.135:9090     ClusterIP      1 => 10.0.0.240:9090      
	 6    10.96.62.248:80        ClusterIP      1 => 10.0.1.23:80         
	                                            2 => 10.0.0.110:80        
	 7    10.96.62.248:69        ClusterIP      1 => 10.0.1.23:69         
	                                            2 => 10.0.0.110:69        
	 8    10.108.101.23:10080    ClusterIP      1 => 10.0.1.23:80         
	                                            2 => 10.0.0.110:80        
	 9    10.108.101.23:10069    ClusterIP      1 => 10.0.1.23:69         
	                                            2 => 10.0.0.110:69        
	 10   10.106.54.89:10069     ClusterIP      1 => 10.0.1.23:69         
	                                            2 => 10.0.0.110:69        
	 11   10.106.54.89:10080     ClusterIP      1 => 10.0.1.23:80         
	                                            2 => 10.0.0.110:80        
	 12   10.98.0.186:10080      ClusterIP      1 => 10.0.1.23:80         
	                                            2 => 10.0.0.110:80        
	 13   10.98.0.186:10069      ClusterIP      1 => 10.0.1.23:69         
	                                            2 => 10.0.0.110:69        
	 14   10.107.24.13:10080     ClusterIP      1 => 10.0.0.136:80        
	 15   10.107.24.13:10069     ClusterIP      1 => 10.0.0.136:69        
	 16   10.105.86.106:10080    ClusterIP      1 => 10.0.0.136:80        
	 17   10.105.86.106:10069    ClusterIP      1 => 10.0.0.136:69        
	 18   10.99.32.140:80        ClusterIP      1 => 10.0.1.23:80         
	                                            2 => 10.0.0.110:80        
	 19   10.106.233.178:80      ClusterIP      1 => 10.0.0.136:80        
	 20   10.109.140.189:20080   ClusterIP      1 => 10.0.1.23:80         
	                                            2 => 10.0.0.110:80        
	 21   10.109.140.189:20069   ClusterIP      1 => 10.0.1.23:69         
	                                            2 => 10.0.0.110:69        
	 22   [fd03::268d]:80        ClusterIP      1 => [fd02::138]:80       
	                                            2 => [fd02::fe]:80        
	 23   [fd03::268d]:69        ClusterIP      1 => [fd02::138]:69       
	                                            2 => [fd02::fe]:69        
	 24   [fd03::1b4]:10080      ClusterIP      1 => [fd02::138]:80       
	                                            2 => [fd02::fe]:80        
	 25   [fd03::1b4]:10069      ClusterIP      1 => [fd02::138]:69       
	                                            2 => [fd02::fe]:69        
	 26   [fd03::634]:10080      ClusterIP      1 => [fd02::138]:80       
	                                            2 => [fd02::fe]:80        
	 27   [fd03::634]:10069      ClusterIP      1 => [fd02::138]:69       
	                                            2 => [fd02::fe]:69        
	 28   [fd03::55b2]:10080     ClusterIP      1 => [fd02::138]:80       
	                                            2 => [fd02::fe]:80        
	 29   [fd03::55b2]:10069     ClusterIP      1 => [fd02::138]:69       
	                                            2 => [fd02::fe]:69        
	 30   [fd03::e739]:10080     ClusterIP      1 => [fd02::a5]:80        
	 31   [fd03::e739]:10069     ClusterIP      1 => [fd02::a5]:69        
	 32   [fd03::33a8]:10080     ClusterIP      1 => [fd02::a5]:80        
	 33   [fd03::33a8]:10069     ClusterIP      1 => [fd02::a5]:69        
	 34   [fd03::2c54]:20069     ClusterIP      1 => [fd02::138]:69       
	                                            2 => [fd02::fe]:69        
	 35   [fd03::2c54]:20080     ClusterIP      1 => [fd02::138]:80       
	                                            2 => [fd02::fe]:80        
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-r6hpl -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                              IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                   
	 152        Disabled           Disabled          3510       k8s:io.cilium.k8s.policy.cluster=default                 fd02::fe   10.0.0.110   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                          
	                                                            k8s:io.kubernetes.pod.namespace=default                                                  
	                                                            k8s:zgroup=testDS                                                                        
	 280        Disabled           Disabled          10322      k8s:app=prometheus                                       fd02::64   10.0.0.240   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                 
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                   
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                        
	 619        Disabled           Disabled          38481      k8s:io.cilium.k8s.policy.cluster=default                 fd02::a5   10.0.0.136   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                          
	                                                            k8s:io.kubernetes.pod.namespace=default                                                  
	                                                            k8s:zgroup=test-k8s2                                                                     
	 1195       Disabled           Disabled          4          reserved:health                                          fd02::ac   10.0.0.150   ready   
	 2350       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                       ready   
	                                                            reserved:host                                                                            
	 2547       Disabled           Disabled          24732      k8s:io.cilium.k8s.policy.cluster=default                 fd02::8e   10.0.0.49    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                          
	                                                            k8s:io.kubernetes.pod.namespace=default                                                  
	                                                            k8s:zgroup=testDSClient                                                                  
	 2651       Disabled           Disabled          35586      k8s:app=grafana                                          fd02::df   10.0.0.84    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                 
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                          
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                        
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
12:36:48 STEP: Running AfterEach for block EntireTestsuite K8sServicesTest
12:36:48 STEP: Running AfterEach for block EntireTestsuite
@joestringer added the area/CI and ci/flake labels on Feb 24, 2021
gandro added a commit to gandro/cilium that referenced this issue May 31, 2021
This increases the curl connection timeout from 5 to 15 seconds to avoid
issues with IPCache propagation delay. On Cilium master and 1.10, it
seems that IPCache updates in CI can take up to 4-8 seconds.

CI flakes likely caused by the increased IPCache propagation delay:

 - cilium#13839
 - cilium#14959
 - cilium#15103
 - cilium#16237

Signed-off-by: Sebastian Wicki <sebastian@isovalent.com>
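
In terms of the failing command shown earlier, that change amounts to raising the curl connection timeout used by the probe; a sketch of the relaxed probe (not the exact helper change in the Cilium test framework):

# Same probe as in the failure above, with --connect-timeout raised from 5 to 15 seconds
# so that IPCache propagation delays of ~4-8 seconds no longer fail the request.
curl --path-as-is -s -D /dev/stderr --fail \
     --connect-timeout 15 --max-time 20 \
     http://192.168.36.12:31372 -H "User-Agent: cilium-test-$RANDOM/1"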
The stale bot commented on Jun 4, 2021:

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.

The stale bot added the stale label on Jun 4, 2021
The stale bot commented on Jun 26, 2021:

This issue has not seen any activity since it was marked stale. Closing.

The stale bot closed this issue as completed on Jun 26, 2021