
CI: K8sDatapathServicesTest Checks E/W loadbalancing (ClusterIP, NodePort from inside cluster, etc) Tests NodePort inside cluster (kube-proxy) with IPSec and externalTrafficPolicy=Local #24602

Closed
maintainer-s-little-helper bot opened this issue Mar 28, 2023 · 2 comments
Labels
ci/flake: This is a known failure that occurs in the tree. Please investigate me!

Comments


Test Name

K8sDatapathServicesTest Checks E/W loadbalancing (ClusterIP, NodePort from inside cluster, etc) Tests NodePort inside cluster (kube-proxy) with IPSec and externalTrafficPolicy=Local

Failure Output

FAIL: Request from k8s1 to service http://[fd04::11]:30654 failed

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.25-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Request from k8s1 to service http://[fd04::11]:30654 failed
Expected command: kubectl exec -n kube-system log-gatherer-6vhm4 -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://[fd04::11]:30654 -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 failed: :30641/1=28:30641/2=28:30641/3=28:30641/4=28:30641/5=28:30641/6=28:30641/7=28:30641/8=28:30641/9=28:30641/10=28
	 
Stderr:
 	 command terminated with exit code 42
	 

/home/jenkins/workspace/Cilium-PR-K8s-1.25-kernel-4.19/src/github.com/cilium/cilium/test/k8s/service_helpers.go:863
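
For readers triaging this, the probe that failed above is easier to follow unpacked. The script below is a readability sketch of the same one-liner the test runs inside the log-gatherer pod (same commands and flags, just reformatted); curl exit code 28 is CURLE_OPERATION_TIMEDOUT, so the output `failed: :30641/1=28:...:30641/10=28` means all 10 requests to the IPv6 NodePort timed out rather than being refused.

```bash
# Readability unpacking of the probe one-liner above (same commands, same flags).
# Each failed round appends ":$id/$i=<curl exit code>" to $fails; code 28 is
# curl's "operation timed out", matching the observed output where all 10
# rounds returned 28.
fails=""
id=$RANDOM
for i in $(seq 1 10); do
  if curl --path-as-is -s -D /dev/stderr --fail \
       --connect-timeout 5 --max-time 20 \
       "http://[fd04::11]:30654" -H "User-Agent: cilium-test-$id/$i"; then
    echo "Test round $id/$i exit code: $?"
  else
    fails=$fails:$id/$i=$?   # $? here is curl's exit code from the if-test
  fi
done
if [ -n "$fails" ]; then echo "failed: $fails"; fi
cnt="${fails//[^:]}"                     # keep only ':' separators, one per failed round
if [ ${#cnt} -gt 0 ]; then exit 42; fi   # exit 42 if any round failed
```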

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 2
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 4
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
UpdateIdentities: Skipping Delete of a non-existing identity
Attempt to remove non-existing IP from ipcache layer
Cilium pods: [cilium-g4frx cilium-hx5ff]
Netpols loaded: 
CiliumNetworkPolicies loaded: default::hairpin-validation-policy 
Endpoint Policy Enforcement:
Pod                          Ingress   Egress
testds-2wzkv                 false     false
testds-jpkk2                 false     false
app1-754f779d8c-9nvjk        false     false
app3-57f78c8bdf-czmn7        false     false
testclient-cbwlp             false     false
testclient-crz5w             false     false
test-k8s2-56f67cd755-gnjvh   false     false
coredns-567b6dd84-ztwnf      false     false
app1-754f779d8c-ctrl7        false     false
app2-86858f4bd6-p5bkj        false     false
echo-9674cb9d4-6grtn         false     false
echo-9674cb9d4-sz2xd         false     false
Cilium agent 'cilium-g4frx': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 45 Failed 0
Cilium agent 'cilium-hx5ff': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 55 Failed 0


Standard Error

09:47:11 STEP: Deploying ipsec_secret.yaml in namespace kube-system
09:47:11 STEP: Installing Cilium
09:47:14 STEP: Waiting for Cilium to become ready
09:47:40 STEP: Validating if Kubernetes DNS is deployed
09:47:40 STEP: Checking if deployment is ready
09:47:40 STEP: Checking if kube-dns service is plumbed correctly
09:47:40 STEP: Checking if DNS can resolve
09:47:40 STEP: Checking if pods have identity
09:47:42 STEP: Checking service kube-system/kube-dns plumbing in cilium pod cilium-g4frx: ClusterIP 10.96.0.10 not found in service list of cilium pod cilium-g4frx
09:47:43 STEP: Checking service kube-system/kube-dns plumbing in cilium pod cilium-g4frx: ClusterIP 10.96.0.10 not found in service list of cilium pod cilium-g4frx
09:47:45 STEP: Checking service kube-system/kube-dns plumbing in cilium pod cilium-g4frx: ClusterIP 10.96.0.10 not found in service list of cilium pod cilium-g4frx
09:47:47 STEP: Checking service kube-system/kube-dns plumbing in cilium pod cilium-g4frx: ClusterIP 10.96.0.10 not found in service list of cilium pod cilium-g4frx
09:47:50 STEP: Kubernetes DNS is up and operational
09:47:50 STEP: Validating Cilium Installation
09:47:50 STEP: Performing Cilium controllers preflight check
09:47:50 STEP: Performing Cilium health check
09:47:50 STEP: Performing Cilium status preflight check
09:47:50 STEP: Checking whether host EP regenerated
09:47:57 STEP: Performing Cilium service preflight check
09:47:57 STEP: Performing K8s service preflight check
09:47:58 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-hx5ff': Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
	 Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory
	 
	 command terminated with exit code 1
	 

09:47:58 STEP: Performing Cilium status preflight check
09:47:58 STEP: Performing Cilium health check
09:47:58 STEP: Checking whether host EP regenerated
09:47:58 STEP: Performing Cilium controllers preflight check
09:48:05 STEP: Performing Cilium service preflight check
09:48:05 STEP: Performing K8s service preflight check
09:48:06 STEP: Performing Cilium controllers preflight check
09:48:06 STEP: Performing Cilium health check
09:48:06 STEP: Performing Cilium status preflight check
09:48:06 STEP: Checking whether host EP regenerated
09:48:13 STEP: Performing Cilium service preflight check
09:48:13 STEP: Performing K8s service preflight check
09:48:15 STEP: Performing Cilium controllers preflight check
09:48:15 STEP: Performing Cilium health check
09:48:15 STEP: Performing Cilium status preflight check
09:48:15 STEP: Checking whether host EP regenerated
09:48:21 STEP: Performing Cilium service preflight check
09:48:21 STEP: Performing K8s service preflight check
09:48:23 STEP: Performing Cilium controllers preflight check
09:48:23 STEP: Performing Cilium health check
09:48:23 STEP: Checking whether host EP regenerated
09:48:23 STEP: Performing Cilium status preflight check
09:48:30 STEP: Performing Cilium service preflight check
09:48:30 STEP: Performing K8s service preflight check
09:48:36 STEP: Waiting for cilium-operator to be ready
09:48:36 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
09:48:36 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
Skipping externalTrafficPolicy=Local test from external node
09:48:36 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://192.168.56.12:30775"
09:48:37 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://192.168.56.12:31502/hello"
09:48:37 STEP: Making 10 curl requests from pod (host netns) k8s2 to "http://192.168.56.12:30775"
09:48:37 STEP: Making 10 curl requests from pod (host netns) k8s2 to "tftp://192.168.56.12:31502/hello"
09:48:37 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://192.168.56.11:30775"
09:48:37 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://192.168.56.11:31502/hello"
09:48:41 STEP: Making 1 curl requests from k8s2 to "http://192.168.56.11:30775"
09:48:46 STEP: Making 1 curl requests from k8s2 to "tftp://192.168.56.11:31502/hello"
Skipping externalTrafficPolicy=Local test from external node
09:48:51 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[fd04::12]:30654"
09:48:51 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[fd04::12]:32229/hello"
09:48:52 STEP: Making 10 curl requests from pod (host netns) k8s2 to "http://[fd04::12]:30654"
09:48:52 STEP: Making 10 curl requests from pod (host netns) k8s2 to "tftp://[fd04::12]:32229/hello"
09:48:52 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[fd04::11]:30654"
FAIL: Request from k8s1 to service http://[fd04::11]:30654 failed
Expected command: kubectl exec -n kube-system log-gatherer-6vhm4 -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://[fd04::11]:30654 -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 failed: :30641/1=28:30641/2=28:30641/3=28:30641/4=28:30641/5=28:30641/6=28:30641/7=28:30641/8=28:30641/9=28:30641/10=28
	 
Stderr:
 	 command terminated with exit code 42
	 

=== Test Finished at 2023-03-28T09:49:42Z====
09:49:42 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathServicesTest
===================== TEST FAILED =====================
09:49:42 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathServicesTest
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE           NAME                               READY   STATUS    RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 cilium-monitoring   grafana-98b4b9789-pbt8z            0/1     Running   0          37m     10.0.0.230      k8s1   <none>           <none>
	 cilium-monitoring   prometheus-6f66c554f4-nntzh        1/1     Running   0          37m     10.0.0.115      k8s1   <none>           <none>
	 default             app1-754f779d8c-9nvjk              2/2     Running   0          2m51s   10.0.0.30       k8s1   <none>           <none>
	 default             app1-754f779d8c-ctrl7              2/2     Running   0          2m51s   10.0.0.236      k8s1   <none>           <none>
	 default             app2-86858f4bd6-p5bkj              1/1     Running   0          2m51s   10.0.0.91       k8s1   <none>           <none>
	 default             app3-57f78c8bdf-czmn7              1/1     Running   0          2m51s   10.0.0.75       k8s1   <none>           <none>
	 default             echo-9674cb9d4-6grtn               2/2     Running   0          2m51s   10.0.0.195      k8s1   <none>           <none>
	 default             echo-9674cb9d4-sz2xd               2/2     Running   0          2m51s   10.0.1.166      k8s2   <none>           <none>
	 default             test-k8s2-56f67cd755-gnjvh         2/2     Running   0          2m51s   10.0.1.194      k8s2   <none>           <none>
	 default             testclient-cbwlp                   1/1     Running   0          2m51s   10.0.1.132      k8s2   <none>           <none>
	 default             testclient-crz5w                   1/1     Running   0          2m51s   10.0.0.29       k8s1   <none>           <none>
	 default             testds-2wzkv                       2/2     Running   0          2m51s   10.0.0.66       k8s1   <none>           <none>
	 default             testds-jpkk2                       2/2     Running   0          2m51s   10.0.1.96       k8s2   <none>           <none>
	 kube-system         cilium-g4frx                       1/1     Running   0          2m33s   192.168.56.12   k8s2   <none>           <none>
	 kube-system         cilium-hx5ff                       1/1     Running   0          2m33s   192.168.56.11   k8s1   <none>           <none>
	 kube-system         cilium-operator-74d5cb9875-cnxcc   1/1     Running   0          2m33s   192.168.56.11   k8s1   <none>           <none>
	 kube-system         cilium-operator-74d5cb9875-dwzvw   1/1     Running   0          2m33s   192.168.56.12   k8s2   <none>           <none>
	 kube-system         coredns-567b6dd84-ztwnf            1/1     Running   0          3m24s   10.0.1.43       k8s2   <none>           <none>
	 kube-system         etcd-k8s1                          1/1     Running   0          42m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-apiserver-k8s1                1/1     Running   0          42m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-controller-manager-k8s1       1/1     Running   0          42m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-proxy-frs67                   1/1     Running   0          38m     192.168.56.12   k8s2   <none>           <none>
	 kube-system         kube-proxy-g2f5v                   1/1     Running   0          42m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-scheduler-k8s1                1/1     Running   0          42m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         log-gatherer-5sq86                 1/1     Running   0          37m     192.168.56.12   k8s2   <none>           <none>
	 kube-system         log-gatherer-6vhm4                 1/1     Running   0          37m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         registry-adder-dv5zh               1/1     Running   0          38m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         registry-adder-j8fzv               1/1     Running   0          38m     192.168.56.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-g4frx cilium-hx5ff]
cmd: kubectl exec -n kube-system cilium-g4frx -c cilium-agent -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend               Service Type   Backend                            
	 1    10.96.0.1:443          ClusterIP      1 => 192.168.56.11:6443 (active)   
	 2    10.109.86.64:9090      ClusterIP      1 => 10.0.0.115:9090 (active)      
	 3    10.111.170.63:3000     ClusterIP                                         
	 5    10.96.0.10:53          ClusterIP      1 => 10.0.1.43:53 (active)         
	 6    10.96.0.10:9153        ClusterIP      1 => 10.0.1.43:9153 (active)       
	 7    10.110.20.72:80        ClusterIP      1 => 10.0.0.30:80 (active)         
	                                            2 => 10.0.0.236:80 (active)        
	 8    10.110.20.72:69        ClusterIP      1 => 10.0.0.30:69 (active)         
	                                            2 => 10.0.0.236:69 (active)        
	 9    10.108.60.232:80       ClusterIP      1 => 10.0.1.96:80 (active)         
	                                            2 => 10.0.0.66:80 (active)         
	 10   10.108.60.232:69       ClusterIP      1 => 10.0.1.96:69 (active)         
	                                            2 => 10.0.0.66:69 (active)         
	 11   10.96.59.42:10069      ClusterIP      1 => 10.0.1.96:69 (active)         
	                                            2 => 10.0.0.66:69 (active)         
	 12   10.96.59.42:10080      ClusterIP      1 => 10.0.1.96:80 (active)         
	                                            2 => 10.0.0.66:80 (active)         
	 13   10.108.107.80:10080    ClusterIP      1 => 10.0.1.96:80 (active)         
	                                            2 => 10.0.0.66:80 (active)         
	 14   10.108.107.80:10069    ClusterIP      1 => 10.0.1.96:69 (active)         
	                                            2 => 10.0.0.66:69 (active)         
	 15   10.106.35.241:10080    ClusterIP      1 => 10.0.1.96:80 (active)         
	                                            2 => 10.0.0.66:80 (active)         
	 16   10.106.35.241:10069    ClusterIP      1 => 10.0.1.96:69 (active)         
	                                            2 => 10.0.0.66:69 (active)         
	 17   10.98.95.181:10080     ClusterIP      1 => 10.0.1.194:80 (active)        
	 18   10.98.95.181:10069     ClusterIP      1 => 10.0.1.194:69 (active)        
	 19   10.105.211.238:10080   ClusterIP      1 => 10.0.1.194:80 (active)        
	 20   10.105.211.238:10069   ClusterIP      1 => 10.0.1.194:69 (active)        
	 21   10.109.113.138:80      ClusterIP      1 => 10.0.1.96:80 (active)         
	                                            2 => 10.0.0.66:80 (active)         
	 22   10.100.79.16:80        ClusterIP      1 => 10.0.1.194:80 (active)        
	 23   10.96.254.206:20080    ClusterIP      1 => 10.0.1.96:80 (active)         
	                                            2 => 10.0.0.66:80 (active)         
	 24   10.96.254.206:20069    ClusterIP      1 => 10.0.1.96:69 (active)         
	                                            2 => 10.0.0.66:69 (active)         
	 25   10.96.18.6:80          ClusterIP      1 => 10.0.1.166:80 (active)        
	                                            2 => 10.0.0.195:80 (active)        
	 26   10.96.18.6:69          ClusterIP      1 => 10.0.1.166:69 (active)        
	                                            2 => 10.0.0.195:69 (active)        
	 27   [fd03::4b67]:80        ClusterIP      1 => [fd02::8a]:80 (active)        
	                                            2 => [fd02::d2]:80 (active)        
	 28   [fd03::4b67]:69        ClusterIP      1 => [fd02::8a]:69 (active)        
	                                            2 => [fd02::d2]:69 (active)        
	 29   [fd03::f5a0]:80        ClusterIP      1 => [fd02::131]:80 (active)       
	                                            2 => [fd02::1b]:80 (active)        
	 30   [fd03::f5a0]:69        ClusterIP      1 => [fd02::131]:69 (active)       
	                                            2 => [fd02::1b]:69 (active)        
	 31   [fd03::62f4]:10080     ClusterIP      1 => [fd02::131]:80 (active)       
	                                            2 => [fd02::1b]:80 (active)        
	 32   [fd03::62f4]:10069     ClusterIP      1 => [fd02::131]:69 (active)       
	                                            2 => [fd02::1b]:69 (active)        
	 33   [fd03::64b7]:10080     ClusterIP      1 => [fd02::131]:80 (active)       
	                                            2 => [fd02::1b]:80 (active)        
	 34   [fd03::64b7]:10069     ClusterIP      1 => [fd02::131]:69 (active)       
	                                            2 => [fd02::1b]:69 (active)        
	 35   [fd03::f78]:10080      ClusterIP      1 => [fd02::131]:80 (active)       
	                                            2 => [fd02::1b]:80 (active)        
	 36   [fd03::f78]:10069      ClusterIP      1 => [fd02::131]:69 (active)       
	                                            2 => [fd02::1b]:69 (active)        
	 37   [fd03::3f36]:10069     ClusterIP      1 => [fd02::143]:69 (active)       
	 38   [fd03::3f36]:10080     ClusterIP      1 => [fd02::143]:80 (active)       
	 39   [fd03::e80c]:10080     ClusterIP      1 => [fd02::143]:80 (active)       
	 40   [fd03::e80c]:10069     ClusterIP      1 => [fd02::143]:69 (active)       
	 41   [fd03::21f6]:20069     ClusterIP      1 => [fd02::131]:69 (active)       
	                                            2 => [fd02::1b]:69 (active)        
	 42   [fd03::21f6]:20080     ClusterIP      1 => [fd02::131]:80 (active)       
	                                            2 => [fd02::1b]:80 (active)        
	 43   [fd03::2f3f]:80        ClusterIP      1 => [fd02::1a8]:80 (active)       
	                                            2 => [fd02::84]:80 (active)        
	 44   [fd03::2f3f]:69        ClusterIP      1 => [fd02::1a8]:69 (active)       
	                                            2 => [fd02::84]:69 (active)        
	 45   10.107.235.32:80       ClusterIP      1 => 10.0.1.166:80 (active)        
	                                            2 => 10.0.0.195:80 (active)        
	 46   10.107.235.32:69       ClusterIP      1 => 10.0.1.166:69 (active)        
	                                            2 => 10.0.0.195:69 (active)        
	 47   [fd03::31e2]:80        ClusterIP      1 => [fd02::1a8]:80 (active)       
	                                            2 => [fd02::84]:80 (active)        
	 48   [fd03::31e2]:69        ClusterIP      1 => [fd02::1a8]:69 (active)       
	                                            2 => [fd02::84]:69 (active)        
	 49   10.109.217.167:443     ClusterIP      1 => 192.168.56.12:4244 (active)   
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-g4frx -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                  IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                        
	 14         Disabled           Disabled          8250       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system   fd02::1b2   10.0.1.43    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                      
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                               
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                   
	                                                            k8s:k8s-app=kube-dns                                                                                          
	 388        Enabled            Enabled           34351      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default       fd02::1a8   10.0.1.166   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                      
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                               
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                       
	                                                            k8s:name=echo                                                                                                 
	 485        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                            ready   
	                                                            reserved:host                                                                                                 
	 1187       Disabled           Disabled          43676      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default       fd02::143   10.0.1.194   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                      
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                               
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                       
	                                                            k8s:zgroup=test-k8s2                                                                                          
	 1235       Disabled           Disabled          3036       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default       fd02::13c   10.0.1.132   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                      
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                               
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                       
	                                                            k8s:zgroup=testDSClient                                                                                       
	 1817       Disabled           Disabled          8414       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default       fd02::131   10.0.1.96    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                      
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                               
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                       
	                                                            k8s:zgroup=testDS                                                                                             
	 3988       Disabled           Disabled          4          reserved:health                                                              fd02::19d   10.0.1.114   ready   
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-hx5ff -c cilium-agent -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend               Service Type   Backend                            
	 1    10.111.170.63:3000     ClusterIP                                         
	 3    10.96.0.10:53          ClusterIP      1 => 10.0.1.43:53 (active)         
	 4    10.96.0.10:9153        ClusterIP      1 => 10.0.1.43:9153 (active)       
	 5    10.96.0.1:443          ClusterIP      1 => 192.168.56.11:6443 (active)   
	 6    10.109.86.64:9090      ClusterIP      1 => 10.0.0.115:9090 (active)      
	 7    10.110.20.72:80        ClusterIP      1 => 10.0.0.30:80 (active)         
	                                            2 => 10.0.0.236:80 (active)        
	 8    10.110.20.72:69        ClusterIP      1 => 10.0.0.30:69 (active)         
	                                            2 => 10.0.0.236:69 (active)        
	 9    10.108.60.232:80       ClusterIP      1 => 10.0.1.96:80 (active)         
	                                            2 => 10.0.0.66:80 (active)         
	 10   10.108.60.232:69       ClusterIP      1 => 10.0.1.96:69 (active)         
	                                            2 => 10.0.0.66:69 (active)         
	 11   10.96.59.42:10069      ClusterIP      1 => 10.0.1.96:69 (active)         
	                                            2 => 10.0.0.66:69 (active)         
	 12   10.96.59.42:10080      ClusterIP      1 => 10.0.1.96:80 (active)         
	                                            2 => 10.0.0.66:80 (active)         
	 13   10.108.107.80:10080    ClusterIP      1 => 10.0.1.96:80 (active)         
	                                            2 => 10.0.0.66:80 (active)         
	 14   10.108.107.80:10069    ClusterIP      1 => 10.0.1.96:69 (active)         
	                                            2 => 10.0.0.66:69 (active)         
	 15   10.106.35.241:10080    ClusterIP      1 => 10.0.1.96:80 (active)         
	                                            2 => 10.0.0.66:80 (active)         
	 16   10.106.35.241:10069    ClusterIP      1 => 10.0.1.96:69 (active)         
	                                            2 => 10.0.0.66:69 (active)         
	 17   10.98.95.181:10069     ClusterIP      1 => 10.0.1.194:69 (active)        
	 18   10.98.95.181:10080     ClusterIP      1 => 10.0.1.194:80 (active)        
	 19   10.105.211.238:10080   ClusterIP      1 => 10.0.1.194:80 (active)        
	 20   10.105.211.238:10069   ClusterIP      1 => 10.0.1.194:69 (active)        
	 21   10.109.113.138:80      ClusterIP      1 => 10.0.1.96:80 (active)         
	                                            2 => 10.0.0.66:80 (active)         
	 22   10.100.79.16:80        ClusterIP      1 => 10.0.1.194:80 (active)        
	 23   10.96.254.206:20080    ClusterIP      1 => 10.0.1.96:80 (active)         
	                                            2 => 10.0.0.66:80 (active)         
	 24   10.96.254.206:20069    ClusterIP      1 => 10.0.1.96:69 (active)         
	                                            2 => 10.0.0.66:69 (active)         
	 25   10.96.18.6:69          ClusterIP      1 => 10.0.1.166:69 (active)        
	                                            2 => 10.0.0.195:69 (active)        
	 26   10.96.18.6:80          ClusterIP      1 => 10.0.1.166:80 (active)        
	                                            2 => 10.0.0.195:80 (active)        
	 27   [fd03::4b67]:80        ClusterIP      1 => [fd02::8a]:80 (active)        
	                                            2 => [fd02::d2]:80 (active)        
	 28   [fd03::4b67]:69        ClusterIP      1 => [fd02::8a]:69 (active)        
	                                            2 => [fd02::d2]:69 (active)        
	 29   [fd03::f5a0]:80        ClusterIP      1 => [fd02::131]:80 (active)       
	                                            2 => [fd02::1b]:80 (active)        
	 30   [fd03::f5a0]:69        ClusterIP      1 => [fd02::131]:69 (active)       
	                                            2 => [fd02::1b]:69 (active)        
	 31   [fd03::62f4]:10080     ClusterIP      1 => [fd02::131]:80 (active)       
	                                            2 => [fd02::1b]:80 (active)        
	 32   [fd03::62f4]:10069     ClusterIP      1 => [fd02::131]:69 (active)       
	                                            2 => [fd02::1b]:69 (active)        
	 33   [fd03::64b7]:10080     ClusterIP      1 => [fd02::131]:80 (active)       
	                                            2 => [fd02::1b]:80 (active)        
	 34   [fd03::64b7]:10069     ClusterIP      1 => [fd02::131]:69 (active)       
	                                            2 => [fd02::1b]:69 (active)        
	 35   [fd03::f78]:10080      ClusterIP      1 => [fd02::131]:80 (active)       
	                                            2 => [fd02::1b]:80 (active)        
	 36   [fd03::f78]:10069      ClusterIP      1 => [fd02::131]:69 (active)       
	                                            2 => [fd02::1b]:69 (active)        
	 37   [fd03::3f36]:10080     ClusterIP      1 => [fd02::143]:80 (active)       
	 38   [fd03::3f36]:10069     ClusterIP      1 => [fd02::143]:69 (active)       
	 39   [fd03::e80c]:10080     ClusterIP      1 => [fd02::143]:80 (active)       
	 40   [fd03::e80c]:10069     ClusterIP      1 => [fd02::143]:69 (active)       
	 41   [fd03::21f6]:20080     ClusterIP      1 => [fd02::131]:80 (active)       
	                                            2 => [fd02::1b]:80 (active)        
	 42   [fd03::21f6]:20069     ClusterIP      1 => [fd02::131]:69 (active)       
	                                            2 => [fd02::1b]:69 (active)        
	 43   [fd03::2f3f]:80        ClusterIP      1 => [fd02::1a8]:80 (active)       
	                                            2 => [fd02::84]:80 (active)        
	 44   [fd03::2f3f]:69        ClusterIP      1 => [fd02::1a8]:69 (active)       
	                                            2 => [fd02::84]:69 (active)        
	 45   10.107.235.32:80       ClusterIP      1 => 10.0.1.166:80 (active)        
	                                            2 => 10.0.0.195:80 (active)        
	 46   10.107.235.32:69       ClusterIP      1 => 10.0.1.166:69 (active)        
	                                            2 => 10.0.0.195:69 (active)        
	 47   [fd03::31e2]:80        ClusterIP      1 => [fd02::1a8]:80 (active)       
	                                            2 => [fd02::84]:80 (active)        
	 48   [fd03::31e2]:69        ClusterIP      1 => [fd02::1a8]:69 (active)       
	                                            2 => [fd02::84]:69 (active)        
	 49   10.109.217.167:443     ClusterIP      1 => 192.168.56.11:4244 (active)   
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-hx5ff -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                              IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                   
	 254        Disabled           Disabled          34813      k8s:id=app3                                                              fd02::b5   10.0.0.75    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                 
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                          
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                  
	                                                            k8s:zgroup=testapp                                                                                       
	 522        Enabled            Enabled           34351      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default   fd02::84   10.0.0.195   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                 
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                          
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                  
	                                                            k8s:name=echo                                                                                            
	 725        Disabled           Disabled          4          reserved:health                                                          fd02::5d   10.0.0.206   ready   
	 849        Disabled           Disabled          8414       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default   fd02::1b   10.0.0.66    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                 
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                          
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                  
	                                                            k8s:zgroup=testDS                                                                                        
	 1235       Disabled           Disabled          35324      k8s:appSecond=true                                                       fd02::53   10.0.0.91    ready   
	                                                            k8s:id=app2                                                                                              
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                 
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=app2-account                                                     
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                  
	                                                            k8s:zgroup=testapp                                                                                       
	 1286       Disabled           Disabled          24222      k8s:id=app1                                                              fd02::8a   10.0.0.30    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                 
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=app1-account                                                     
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                  
	                                                            k8s:zgroup=testapp                                                                                       
	 1622       Disabled           Disabled          3036       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default   fd02::b9   10.0.0.29    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                 
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                          
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                  
	 2216       Disabled           Disabled          24222      k8s:id=app1                                                              fd02::d2   10.0.0.236   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                 
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=app1-account                                                     
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                  
	                                                            k8s:zgroup=testapp                                                                                       
	 3961       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                       ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                              
	                                                            reserved:host                                                                                            
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
09:49:54 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|ad3118f5_K8sDatapathServicesTest_Checks_E-W_loadbalancing_(ClusterIP,_NodePort_from_inside_cluster,_etc)_Tests_NodePort_inside_cluster_(kube-proxy)_with_IPSec_and_externalTrafficPolicy=Local.zip]]


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-1.25-kernel-4.19//1477/artifact/ad3118f5_K8sDatapathServicesTest_Checks_E-W_loadbalancing_(ClusterIP,_NodePort_from_inside_cluster,_etc)_Tests_NodePort_inside_cluster_(kube-proxy)_with_IPSec_and_externalTrafficPolicy=Local.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.25-kernel-4.19//1477/artifact/test_results_Cilium-PR-K8s-1.25-kernel-4.19_1477_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.25-kernel-4.19/1477/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
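
For anyone picking this up, a manual triage pass could look like the sketch below. This is not part of the CI output; it assumes the pod names from this run, that iproute2 is available in the cilium-agent container, and that the `cilium encrypt status` / `cilium monitor` agent subcommands exist in the Cilium build under test.

```bash
# Hedged triage sketch (assumptions: pod names are from this run; iproute2 and
# the agent subcommands below are available in this Cilium build).

# Re-run a single failing probe by hand, verbosely, from the same log-gatherer pod.
kubectl exec -n kube-system log-gatherer-6vhm4 -- \
  curl -6 -sv --fail --connect-timeout 5 --max-time 20 'http://[fd04::11]:30654'

# Inspect IPSec/XFRM state and policies on k8s1 (cilium-hx5ff), the node being targeted.
kubectl exec -n kube-system cilium-hx5ff -c cilium-agent -- ip -s xfrm state
kubectl exec -n kube-system cilium-hx5ff -c cilium-agent -- ip xfrm policy

# Agent-side view: encryption status and any datapath drops while re-running the probe.
kubectl exec -n kube-system cilium-hx5ff -c cilium-agent -- cilium encrypt status
kubectl exec -n kube-system cilium-hx5ff -c cilium-agent -- cilium monitor --type drop
```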

maintainer-s-little-helper bot added the ci/flake label on Mar 28, 2023
@maintainer-s-little-helper

PR #24575 hit this flake with 86.61% similarity:


Test Name

K8sDatapathServicesTest Checks E/W loadbalancing (ClusterIP, NodePort from inside cluster, etc) Tests NodePort inside cluster (kube-proxy) with IPSec and externalTrafficPolicy=Local

Failure Output

FAIL: Request from k8s1 to service http://[fd04::11]:30440 failed

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.25-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Request from k8s1 to service http://[fd04::11]:30440 failed
Expected command: kubectl exec -n kube-system log-gatherer-bg8pn -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://[fd04::11]:30440 -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 failed: :14794/1=28:14794/2=28:14794/3=28:14794/4=28:14794/5=28:14794/6=28:14794/7=28:14794/8=28:14794/9=28:14794/10=28
	 
Stderr:
 	 command terminated with exit code 42
	 

/home/jenkins/workspace/Cilium-PR-K8s-1.25-kernel-4.19/src/github.com/cilium/cilium/test/k8s/service_helpers.go:863

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 2
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 1 errors/warnings:
UpdateIdentities: Skipping Delete of a non-existing identity
Cilium pods: [cilium-9lg5n cilium-q4v9b]
Netpols loaded: 
CiliumNetworkPolicies loaded: default::hairpin-validation-policy 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
grafana-98b4b9789-l47nr       false     false
echo-9674cb9d4-gr828          false     false
testds-d8hkc                  false     false
prometheus-6f66c554f4-c57sl   false     false
app1-754f779d8c-7k9ft         false     false
echo-9674cb9d4-txr2d          false     false
test-k8s2-56f67cd755-knc76    false     false
testclient-sq7c8              false     false
testds-z7b6j                  false     false
coredns-567b6dd84-8lcn8       false     false
app1-754f779d8c-4whj8         false     false
app2-86858f4bd6-j78sz         false     false
app3-57f78c8bdf-5sncf         false     false
testclient-jqqss              false     false
Cilium agent 'cilium-9lg5n': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 70 Failed 0
Cilium agent 'cilium-q4v9b': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 40 Failed 0


Standard Error

10:19:20 STEP: Deploying ipsec_secret.yaml in namespace kube-system
10:19:20 STEP: Installing Cilium
10:19:30 STEP: Waiting for Cilium to become ready
10:19:48 STEP: Validating if Kubernetes DNS is deployed
10:19:48 STEP: Checking if deployment is ready
10:19:48 STEP: Checking if kube-dns service is plumbed correctly
10:19:48 STEP: Checking if pods have identity
10:19:48 STEP: Checking if DNS can resolve
10:19:51 STEP: Kubernetes DNS is up and operational
10:19:51 STEP: Validating Cilium Installation
10:19:51 STEP: Performing Cilium controllers preflight check
10:19:51 STEP: Performing Cilium health check
10:19:51 STEP: Checking whether host EP regenerated
10:19:51 STEP: Performing Cilium status preflight check
10:19:58 STEP: Performing Cilium service preflight check
10:19:58 STEP: Performing K8s service preflight check
10:20:01 STEP: Waiting for cilium-operator to be ready
10:20:01 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
10:20:01 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
Skipping externalTrafficPolicy=Local test from external node
10:20:01 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://192.168.56.12:31955"
10:20:01 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://192.168.56.12:32466/hello"
10:20:02 STEP: Making 10 curl requests from pod (host netns) k8s2 to "http://192.168.56.12:31955"
10:20:02 STEP: Making 10 curl requests from pod (host netns) k8s2 to "tftp://192.168.56.12:32466/hello"
10:20:02 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://192.168.56.11:31955"
10:20:02 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://192.168.56.11:32466/hello"
10:20:05 STEP: Making 1 curl requests from k8s2 to "http://192.168.56.11:31955"
10:20:11 STEP: Making 1 curl requests from k8s2 to "tftp://192.168.56.11:32466/hello"
Skipping externalTrafficPolicy=Local test from external node
10:20:16 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[fd04::12]:30440"
10:20:16 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[fd04::12]:30558/hello"
10:20:16 STEP: Making 10 curl requests from pod (host netns) k8s2 to "http://[fd04::12]:30440"
10:20:16 STEP: Making 10 curl requests from pod (host netns) k8s2 to "tftp://[fd04::12]:30558/hello"
10:20:17 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[fd04::11]:30440"
FAIL: Request from k8s1 to service http://[fd04::11]:30440 failed
Expected command: kubectl exec -n kube-system log-gatherer-bg8pn -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://[fd04::11]:30440 -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 failed: :14794/1=28:14794/2=28:14794/3=28:14794/4=28:14794/5=28:14794/6=28:14794/7=28:14794/8=28:14794/9=28:14794/10=28
	 
Stderr:
 	 command terminated with exit code 42
	 

=== Test Finished at 2023-03-28T10:21:07Z====
10:21:07 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathServicesTest
===================== TEST FAILED =====================
10:21:07 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathServicesTest
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE           NAME                              READY   STATUS    RESTARTS   AGE    IP              NODE   NOMINATED NODE   READINESS GATES
	 cilium-monitoring   grafana-98b4b9789-l47nr           1/1     Running   0          32m    10.0.0.113      k8s1   <none>           <none>
	 cilium-monitoring   prometheus-6f66c554f4-c57sl       1/1     Running   0          32m    10.0.0.204      k8s1   <none>           <none>
	 default             app1-754f779d8c-4whj8             2/2     Running   0          2m2s   10.0.0.118      k8s1   <none>           <none>
	 default             app1-754f779d8c-7k9ft             2/2     Running   0          2m2s   10.0.0.76       k8s1   <none>           <none>
	 default             app2-86858f4bd6-j78sz             1/1     Running   0          2m2s   10.0.0.75       k8s1   <none>           <none>
	 default             app3-57f78c8bdf-5sncf             1/1     Running   0          2m2s   10.0.0.10       k8s1   <none>           <none>
	 default             echo-9674cb9d4-gr828              2/2     Running   0          2m2s   10.0.1.37       k8s2   <none>           <none>
	 default             echo-9674cb9d4-txr2d              2/2     Running   0          2m2s   10.0.0.236      k8s1   <none>           <none>
	 default             test-k8s2-56f67cd755-knc76        2/2     Running   0          2m2s   10.0.1.120      k8s2   <none>           <none>
	 default             testclient-jqqss                  1/1     Running   0          2m2s   10.0.1.144      k8s2   <none>           <none>
	 default             testclient-sq7c8                  1/1     Running   0          2m2s   10.0.0.6        k8s1   <none>           <none>
	 default             testds-d8hkc                      2/2     Running   0          2m2s   10.0.0.48       k8s1   <none>           <none>
	 default             testds-z7b6j                      2/2     Running   0          2m2s   10.0.1.31       k8s2   <none>           <none>
	 kube-system         cilium-9lg5n                      1/1     Running   0          102s   192.168.56.11   k8s1   <none>           <none>
	 kube-system         cilium-operator-9ff7898f5-hpk46   1/1     Running   0          102s   192.168.56.12   k8s2   <none>           <none>
	 kube-system         cilium-operator-9ff7898f5-w2r56   1/1     Running   0          102s   192.168.56.11   k8s1   <none>           <none>
	 kube-system         cilium-q4v9b                      1/1     Running   0          102s   192.168.56.12   k8s2   <none>           <none>
	 kube-system         coredns-567b6dd84-8lcn8           1/1     Running   0          10m    10.0.0.146      k8s1   <none>           <none>
	 kube-system         etcd-k8s1                         1/1     Running   0          38m    192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-apiserver-k8s1               1/1     Running   0          38m    192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-controller-manager-k8s1      1/1     Running   0          38m    192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-proxy-2r2jd                  1/1     Running   0          33m    192.168.56.12   k8s2   <none>           <none>
	 kube-system         kube-proxy-v6bsj                  1/1     Running   0          37m    192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-scheduler-k8s1               1/1     Running   0          38m    192.168.56.11   k8s1   <none>           <none>
	 kube-system         log-gatherer-bg8pn                1/1     Running   0          33m    192.168.56.11   k8s1   <none>           <none>
	 kube-system         log-gatherer-xhqk4                1/1     Running   0          33m    192.168.56.12   k8s2   <none>           <none>
	 kube-system         registry-adder-rqs7p              1/1     Running   0          33m    192.168.56.12   k8s2   <none>           <none>
	 kube-system         registry-adder-wsvdg              1/1     Running   0          33m    192.168.56.11   k8s1   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-9lg5n cilium-q4v9b]
cmd: kubectl exec -n kube-system cilium-9lg5n -c cilium-agent -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend               Service Type   Backend                            
	 1    10.96.0.10:53          ClusterIP      1 => 10.0.0.146:53 (active)        
	 2    10.96.0.10:9153        ClusterIP      1 => 10.0.0.146:9153 (active)      
	 3    10.111.94.242:3000     ClusterIP      1 => 10.0.0.113:3000 (active)      
	 4    10.108.148.109:9090    ClusterIP      1 => 10.0.0.204:9090 (active)      
	 6    10.96.0.1:443          ClusterIP      1 => 192.168.56.11:6443 (active)   
	 8    10.100.62.194:80       ClusterIP      1 => 10.0.0.118:80 (active)        
	                                            2 => 10.0.0.76:80 (active)         
	 9    10.100.62.194:69       ClusterIP      1 => 10.0.0.118:69 (active)        
	                                            2 => 10.0.0.76:69 (active)         
	 10   10.106.28.241:80       ClusterIP      1 => 10.0.1.31:80 (active)         
	                                            2 => 10.0.0.48:80 (active)         
	 11   10.106.28.241:69       ClusterIP      1 => 10.0.1.31:69 (active)         
	                                            2 => 10.0.0.48:69 (active)         
	 12   10.104.60.130:10080    ClusterIP      1 => 10.0.1.31:80 (active)         
	                                            2 => 10.0.0.48:80 (active)         
	 13   10.104.60.130:10069    ClusterIP      1 => 10.0.1.31:69 (active)         
	                                            2 => 10.0.0.48:69 (active)         
	 14   10.105.57.84:10069     ClusterIP      1 => 10.0.1.31:69 (active)         
	                                            2 => 10.0.0.48:69 (active)         
	 15   10.105.57.84:10080     ClusterIP      1 => 10.0.1.31:80 (active)         
	                                            2 => 10.0.0.48:80 (active)         
	 16   10.110.135.157:10080   ClusterIP      1 => 10.0.1.31:80 (active)         
	                                            2 => 10.0.0.48:80 (active)         
	 17   10.110.135.157:10069   ClusterIP      1 => 10.0.1.31:69 (active)         
	                                            2 => 10.0.0.48:69 (active)         
	 18   10.100.103.72:10080    ClusterIP      1 => 10.0.1.120:80 (active)        
	 19   10.100.103.72:10069    ClusterIP      1 => 10.0.1.120:69 (active)        
	 20   10.111.186.182:10080   ClusterIP      1 => 10.0.1.120:80 (active)        
	 21   10.111.186.182:10069   ClusterIP      1 => 10.0.1.120:69 (active)        
	 22   10.102.182.227:80      ClusterIP      1 => 10.0.1.31:80 (active)         
	                                            2 => 10.0.0.48:80 (active)         
	 23   10.101.102.112:80      ClusterIP      1 => 10.0.1.120:80 (active)        
	 24   10.104.43.186:20069    ClusterIP      1 => 10.0.1.31:69 (active)         
	                                            2 => 10.0.0.48:69 (active)         
	 25   10.104.43.186:20080    ClusterIP      1 => 10.0.1.31:80 (active)         
	                                            2 => 10.0.0.48:80 (active)         
	 26   10.102.62.188:69       ClusterIP      1 => 10.0.0.236:69 (active)        
	                                            2 => 10.0.1.37:69 (active)         
	 27   10.102.62.188:80       ClusterIP      1 => 10.0.0.236:80 (active)        
	                                            2 => 10.0.1.37:80 (active)         
	 28   [fd03::c36e]:80        ClusterIP      1 => [fd02::d7]:80 (active)        
	                                            2 => [fd02::8]:80 (active)         
	 29   [fd03::c36e]:69        ClusterIP      1 => [fd02::d7]:69 (active)        
	                                            2 => [fd02::8]:69 (active)         
	 30   [fd03::4cea]:80        ClusterIP      1 => [fd02::108]:80 (active)       
	                                            2 => [fd02::81]:80 (active)        
	 31   [fd03::4cea]:69        ClusterIP      1 => [fd02::108]:69 (active)       
	                                            2 => [fd02::81]:69 (active)        
	 32   [fd03::e578]:10080     ClusterIP      1 => [fd02::108]:80 (active)       
	                                            2 => [fd02::81]:80 (active)        
	 33   [fd03::e578]:10069     ClusterIP      1 => [fd02::108]:69 (active)       
	                                            2 => [fd02::81]:69 (active)        
	 34   [fd03::3d7a]:10069     ClusterIP      1 => [fd02::108]:69 (active)       
	                                            2 => [fd02::81]:69 (active)        
	 35   [fd03::3d7a]:10080     ClusterIP      1 => [fd02::108]:80 (active)       
	                                            2 => [fd02::81]:80 (active)        
	 36   [fd03::8c51]:10080     ClusterIP      1 => [fd02::108]:80 (active)       
	                                            2 => [fd02::81]:80 (active)        
	 37   [fd03::8c51]:10069     ClusterIP      1 => [fd02::108]:69 (active)       
	                                            2 => [fd02::81]:69 (active)        
	 38   [fd03::9730]:10080     ClusterIP      1 => [fd02::1b7]:80 (active)       
	 39   [fd03::9730]:10069     ClusterIP      1 => [fd02::1b7]:69 (active)       
	 40   [fd03::82be]:10069     ClusterIP      1 => [fd02::1b7]:69 (active)       
	 41   [fd03::82be]:10080     ClusterIP      1 => [fd02::1b7]:80 (active)       
	 42   [fd03::fed4]:20080     ClusterIP      1 => [fd02::108]:80 (active)       
	                                            2 => [fd02::81]:80 (active)        
	 43   [fd03::fed4]:20069     ClusterIP      1 => [fd02::108]:69 (active)       
	                                            2 => [fd02::81]:69 (active)        
	 44   [fd03::7d4d]:69        ClusterIP      1 => [fd02::8b]:69 (active)        
	                                            2 => [fd02::144]:69 (active)       
	 45   [fd03::7d4d]:80        ClusterIP      1 => [fd02::8b]:80 (active)        
	                                            2 => [fd02::144]:80 (active)       
	 46   10.107.132.222:80      ClusterIP      1 => 10.0.0.236:80 (active)        
	                                            2 => 10.0.1.37:80 (active)         
	 47   10.107.132.222:69      ClusterIP      1 => 10.0.0.236:69 (active)        
	                                            2 => 10.0.1.37:69 (active)         
	 48   [fd03::8171]:80        ClusterIP      1 => [fd02::8b]:80 (active)        
	                                            2 => [fd02::144]:80 (active)       
	 49   [fd03::8171]:69        ClusterIP      1 => [fd02::8b]:69 (active)        
	                                            2 => [fd02::144]:69 (active)       
	 50   10.106.243.80:443      ClusterIP      1 => 192.168.56.11:4244 (active)   
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-9lg5n -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                        IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                             
	 615        Enabled            Enabled           5549       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default             fd02::8b   10.0.0.236   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                            
	                                                            k8s:name=echo                                                                                                      
	 660        Disabled           Disabled          40078      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default             fd02::8a   10.0.0.6     ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                            
	                                                            k8s:zgroup=testDSClient                                                                                            
	 670        Disabled           Disabled          4          reserved:health                                                                    fd02::2a   10.0.0.130   ready   
	 768        Disabled           Disabled          54301      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default             fd02::81   10.0.0.48    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                            
	                                                            k8s:zgroup=testDS                                                                                                  
	 993        Disabled           Disabled          20014      k8s:appSecond=true                                                                 fd02::7d   10.0.0.75    ready   
	                                                            k8s:id=app2                                                                                                        
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                             
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=app2-account                                                               
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                            
	                                                            k8s:zgroup=testapp                                                                                                 
	 1025       Disabled           Disabled          61221      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system         fd02::27   10.0.0.146   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                        
	                                                            k8s:k8s-app=kube-dns                                                                                               
	 1846       Disabled           Disabled          40889      k8s:id=app1                                                                        fd02::d7   10.0.0.118   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                             
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=app1-account                                                               
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                            
	                                                            k8s:zgroup=testapp                                                                                                 
	 2031       Disabled           Disabled          40889      k8s:id=app1                                                                        fd02::8    10.0.0.76    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                             
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=app1-account                                                               
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                            
	                                                            k8s:zgroup=testapp                                                                                                 
	 3460       Disabled           Disabled          13310      k8s:app=prometheus                                                                 fd02::38   10.0.0.204   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                             
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                  
	 3813       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                 ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                          
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                        
	                                                            reserved:host                                                                                                      
	 3881       Disabled           Disabled          4357       k8s:id=app3                                                                        fd02::9c   10.0.0.10    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                             
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                            
	                                                            k8s:zgroup=testapp                                                                                                 
	 4035       Disabled           Disabled          11022      k8s:app=grafana                                                                    fd02::ee   10.0.0.113   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                  
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-q4v9b -c cilium-agent -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend               Service Type   Backend                            
	 1    10.96.0.1:443          ClusterIP      1 => 192.168.56.11:6443 (active)   
	 2    10.96.0.10:53          ClusterIP      1 => 10.0.0.146:53 (active)        
	 3    10.96.0.10:9153        ClusterIP      1 => 10.0.0.146:9153 (active)      
	 4    10.111.94.242:3000     ClusterIP      1 => 10.0.0.113:3000 (active)      
	 5    10.108.148.109:9090    ClusterIP      1 => 10.0.0.204:9090 (active)      
	 7    10.100.62.194:69       ClusterIP      1 => 10.0.0.118:69 (active)        
	                                            2 => 10.0.0.76:69 (active)         
	 8    10.100.62.194:80       ClusterIP      1 => 10.0.0.118:80 (active)        
	                                            2 => 10.0.0.76:80 (active)         
	 9    10.106.28.241:80       ClusterIP      1 => 10.0.1.31:80 (active)         
	                                            2 => 10.0.0.48:80 (active)         
	 10   10.106.28.241:69       ClusterIP      1 => 10.0.1.31:69 (active)         
	                                            2 => 10.0.0.48:69 (active)         
	 11   10.104.60.130:10080    ClusterIP      1 => 10.0.1.31:80 (active)         
	                                            2 => 10.0.0.48:80 (active)         
	 12   10.104.60.130:10069    ClusterIP      1 => 10.0.1.31:69 (active)         
	                                            2 => 10.0.0.48:69 (active)         
	 13   10.105.57.84:10080     ClusterIP      1 => 10.0.1.31:80 (active)         
	                                            2 => 10.0.0.48:80 (active)         
	 14   10.105.57.84:10069     ClusterIP      1 => 10.0.1.31:69 (active)         
	                                            2 => 10.0.0.48:69 (active)         
	 15   10.110.135.157:10080   ClusterIP      1 => 10.0.1.31:80 (active)         
	                                            2 => 10.0.0.48:80 (active)         
	 16   10.110.135.157:10069   ClusterIP      1 => 10.0.1.31:69 (active)         
	                                            2 => 10.0.0.48:69 (active)         
	 17   10.100.103.72:10069    ClusterIP      1 => 10.0.1.120:69 (active)        
	 18   10.100.103.72:10080    ClusterIP      1 => 10.0.1.120:80 (active)        
	 19   10.111.186.182:10080   ClusterIP      1 => 10.0.1.120:80 (active)        
	 20   10.111.186.182:10069   ClusterIP      1 => 10.0.1.120:69 (active)        
	 21   10.102.182.227:80      ClusterIP      1 => 10.0.1.31:80 (active)         
	                                            2 => 10.0.0.48:80 (active)         
	 22   10.101.102.112:80      ClusterIP      1 => 10.0.1.120:80 (active)        
	 23   10.104.43.186:20069    ClusterIP      1 => 10.0.1.31:69 (active)         
	                                            2 => 10.0.0.48:69 (active)         
	 24   10.104.43.186:20080    ClusterIP      1 => 10.0.1.31:80 (active)         
	                                            2 => 10.0.0.48:80 (active)         
	 25   10.102.62.188:80       ClusterIP      1 => 10.0.0.236:80 (active)        
	                                            2 => 10.0.1.37:80 (active)         
	 26   10.102.62.188:69       ClusterIP      1 => 10.0.0.236:69 (active)        
	                                            2 => 10.0.1.37:69 (active)         
	 27   [fd03::c36e]:69        ClusterIP      1 => [fd02::d7]:69 (active)        
	                                            2 => [fd02::8]:69 (active)         
	 28   [fd03::c36e]:80        ClusterIP      1 => [fd02::d7]:80 (active)        
	                                            2 => [fd02::8]:80 (active)         
	 29   [fd03::4cea]:80        ClusterIP      1 => [fd02::108]:80 (active)       
	                                            2 => [fd02::81]:80 (active)        
	 30   [fd03::4cea]:69        ClusterIP      1 => [fd02::108]:69 (active)       
	                                            2 => [fd02::81]:69 (active)        
	 31   [fd03::e578]:10080     ClusterIP      1 => [fd02::108]:80 (active)       
	                                            2 => [fd02::81]:80 (active)        
	 32   [fd03::e578]:10069     ClusterIP      1 => [fd02::108]:69 (active)       
	                                            2 => [fd02::81]:69 (active)        
	 33   [fd03::3d7a]:10080     ClusterIP      1 => [fd02::108]:80 (active)       
	                                            2 => [fd02::81]:80 (active)        
	 34   [fd03::3d7a]:10069     ClusterIP      1 => [fd02::108]:69 (active)       
	                                            2 => [fd02::81]:69 (active)        
	 35   [fd03::8c51]:10080     ClusterIP      1 => [fd02::108]:80 (active)       
	                                            2 => [fd02::81]:80 (active)        
	 36   [fd03::8c51]:10069     ClusterIP      1 => [fd02::108]:69 (active)       
	                                            2 => [fd02::81]:69 (active)        
	 37   [fd03::9730]:10069     ClusterIP      1 => [fd02::1b7]:69 (active)       
	 38   [fd03::9730]:10080     ClusterIP      1 => [fd02::1b7]:80 (active)       
	 39   [fd03::82be]:10080     ClusterIP      1 => [fd02::1b7]:80 (active)       
	 40   [fd03::82be]:10069     ClusterIP      1 => [fd02::1b7]:69 (active)       
	 41   [fd03::fed4]:20069     ClusterIP      1 => [fd02::108]:69 (active)       
	                                            2 => [fd02::81]:69 (active)        
	 42   [fd03::fed4]:20080     ClusterIP      1 => [fd02::108]:80 (active)       
	                                            2 => [fd02::81]:80 (active)        
	 43   [fd03::7d4d]:80        ClusterIP      1 => [fd02::8b]:80 (active)        
	                                            2 => [fd02::144]:80 (active)       
	 44   [fd03::7d4d]:69        ClusterIP      1 => [fd02::8b]:69 (active)        
	                                            2 => [fd02::144]:69 (active)       
	 45   10.107.132.222:69      ClusterIP      1 => 10.0.0.236:69 (active)        
	                                            2 => 10.0.1.37:69 (active)         
	 46   10.107.132.222:80      ClusterIP      1 => 10.0.0.236:80 (active)        
	                                            2 => 10.0.1.37:80 (active)         
	 47   [fd03::8171]:69        ClusterIP      1 => [fd02::8b]:69 (active)        
	                                            2 => [fd02::144]:69 (active)       
	 48   [fd03::8171]:80        ClusterIP      1 => [fd02::8b]:80 (active)        
	                                            2 => [fd02::144]:80 (active)       
	 49   10.106.243.80:443      ClusterIP      1 => 192.168.56.12:4244 (active)   
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-q4v9b -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                              IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                    
	 593        Enabled            Enabled           5549       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default   fd02::144   10.0.1.37    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                   
	                                                            k8s:name=echo                                                                                             
	 1601       Disabled           Disabled          5666       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default   fd02::1b7   10.0.1.120   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                   
	                                                            k8s:zgroup=test-k8s2                                                                                      
	 2422       Disabled           Disabled          4          reserved:health                                                          fd02::160   10.0.1.219   ready   
	 2589       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                        ready   
	                                                            reserved:host                                                                                             
	 2802       Disabled           Disabled          40078      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default   fd02::1e5   10.0.1.144   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                   
	 2896       Disabled           Disabled          54301      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default   fd02::108   10.0.1.31    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                   
	                                                            k8s:zgroup=testDS                                                                                         
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
10:21:19 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|7146ab17_K8sDatapathServicesTest_Checks_E-W_loadbalancing_(ClusterIP,_NodePort_from_inside_cluster,_etc)_Tests_NodePort_inside_cluster_(kube-proxy)_with_IPSec_and_externalTrafficPolicy=Local.zip]]


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-1.25-kernel-4.19//1479/artifact/7146ab17_K8sDatapathServicesTest_Checks_E-W_loadbalancing_(ClusterIP,_NodePort_from_inside_cluster,_etc)_Tests_NodePort_inside_cluster_(kube-proxy)_with_IPSec_and_externalTrafficPolicy=Local.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.25-kernel-4.19//1479/artifact/test_results_Cilium-PR-K8s-1.25-kernel-4.19_1479_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.25-kernel-4.19/1479/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
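
Before commenting, a search over the existing ci/flake issues can surface a duplicate. A minimal sketch using the GitHub CLI (assuming gh is installed and authenticated; the search terms below are illustrative, not taken from this report):

# list open and closed flake issues mentioning this test (illustrative search terms)
gh issue list --repo cilium/cilium --label ci/flake --state all \
  --search "K8sDatapathServicesTest NodePort IPSec externalTrafficPolicy=Local"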

@julianwiedmann (Member) commented Mar 28, 2023

Looks like the usual fallout from #24557: CI is pulling in the re-introduced test, but the linked PRs are not based on the corresponding code fixes from #24557.
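
If that is the case, the likely remedy for an affected PR is a rebase onto a branch that already contains the fixes from #24557, so the re-introduced test and the corresponding code changes land together. A minimal sketch, assuming the cilium/cilium remote is configured as upstream and the PR targets main:

# pick up the latest upstream state, which includes the fixes from #24557
git fetch upstream
# replay the PR branch on top of it
git rebase upstream/main
# push the rebased branch to re-trigger CI
git push --force-with-lease origin HEAD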
