
CI: K8sDatapathServicesTest Checks E/W loadbalancing (ClusterIP, NodePort from inside cluster, etc) with L7 policy Tests NodePort with L7 Policy #24961

Closed
maintainer-s-little-helper bot opened this issue Apr 18, 2023 · 3 comments
Labels
ci/flake: This is a known failure that occurs in the tree. Please investigate me!
stale: The stale bot thinks this issue is old. Add "pinned" label to prevent this from becoming stale.

Comments

@maintainer-s-little-helper

Test Name

K8sDatapathServicesTest Checks E/W loadbalancing (ClusterIP, NodePort from inside cluster, etc) with L7 policy Tests NodePort with L7 Policy

Failure Output

FAIL: Request from testclient-vsbl2 pod to service tftp://[fd04::11]:30220/hello failed
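
For triage, the failing request can be replayed by hand from the client pod. This is a minimal sketch built from the command in the stacktrace below; the pod name and NodePort address are specific to this run:

# Single attempt with the same curl flags the harness uses. kubectl exec
# propagates the remote command's exit status, so the echoed value is curl's
# exit code and can be matched against the "failed: :<id>/<round>=<code>" log.
kubectl exec -n default testclient-vsbl2 -- \
  curl --path-as-is -s --fail --connect-timeout 5 --max-time 20 \
  tftp://[fd04::11]:30220/hello; echo "curl exit code: $?"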

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.25-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Request from testclient-vsbl2 pod to service tftp://[fd04::11]:30220/hello failed
Expected command: kubectl exec -n default testclient-vsbl2 -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 tftp://[fd04::11]:30220/hello -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 
	 Hostname: testds-5ll8l
	 
	 Request Information:
	 	client_address=fd02::17c
	 	client_port=38946
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 5119/1 exit code: 0
	 
	 Hostname: testds-5ll8l
	 
	 Request Information:
	 	client_address=fd02::17c
	 	client_port=49888
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 5119/2 exit code: 0
	 
	 Hostname: testds-c2427
	 
	 Request Information:
	 	client_address=fd02::17c
	 	client_port=6147
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 5119/3 exit code: 0
	 
	 Hostname: testds-5ll8l
	 
	 Request Information:
	 	client_address=fd02::17c
	 	client_port=55748
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 5119/4 exit code: 0
	 
	 Hostname: testds-c2427
	 
	 Request Information:
	 	client_address=fd02::17c
	 	client_port=58898
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 5119/5 exit code: 0
	 
	 Hostname: testds-5ll8l
	 
	 Request Information:
	 	client_address=fd02::17c
	 	client_port=49359
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 5119/6 exit code: 0
	 
	 Hostname: testds-5ll8l
	 
	 Request Information:
	 	client_address=fd02::17c
	 	client_port=46641
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 5119/7 exit code: 0
	 
	 Hostname: testds-5ll8l
	 
	 Request Information:
	 	client_address=fd02::17c
	 	client_port=46263
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 5119/9 exit code: 0
	 
	 Hostname: testds-5ll8l
	 
	 Request Information:
	 	client_address=fd02::17c
	 	client_port=40100
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 5119/10 exit code: 0
	 failed: :5119/8=72
	 
Stderr:
 	 command terminated with exit code 42
	 

/home/jenkins/workspace/Cilium-PR-K8s-1.25-kernel-4.19/src/github.com/cilium/cilium/test/k8s/service_helpers.go:514
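
For readability, the one-liner the harness executes inside the client pod expands to roughly the following (a sketch; the flags and target URL are copied from the failing command above). Note that round 8 failed with curl exit code 72, which for TFTP transfers is CURLE_TFTP_UNKNOWNID (unknown transfer ID):

fails=""
id=$RANDOM                               # random run ID; 5119 in this run
for i in $(seq 1 10); do
  # One TFTP request per round: -s silences progress output, -D /dev/stderr
  # dumps response headers to stderr, --fail turns server errors into a
  # non-zero curl exit code.
  if curl --path-as-is -s -D /dev/stderr --fail \
       --connect-timeout 5 --max-time 20 \
       tftp://[fd04::11]:30220/hello -H "User-Agent: cilium-test-$id/$i"; then
    echo "Test round $id/$i exit code: $?"
  else
    fails=$fails:$id/$i=$?               # record ":<id>/<round>=<exit code>"
  fi
done
if [ -n "$fails" ]; then echo "failed: $fails"; fi
cnt="${fails//[^:]}"                     # keep only the ':' separators
if [ ${#cnt} -gt 0 ]; then exit 42; fi   # any failed round => exit 42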

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Cilium pods: [cilium-dbtvg cilium-xwscp]
Netpols loaded: 
CiliumNetworkPolicies loaded: default::hairpin-validation-policy default::l7-policy-demo 
Endpoint Policy Enforcement:
Pod                          Ingress   Egress
app2-86858f4bd6-ghql6        false     false
app3-57f78c8bdf-49rld        false     false
echo-9674cb9d4-2q9pf         false     false
echo-9674cb9d4-vk9xs         false     false
test-k8s2-56f67cd755-5qwt4   false     false
testclient-vsbl2             false     false
testds-5ll8l                 false     false
testds-c2427                 false     false
coredns-567b6dd84-jbxhd      false     false
app1-754f779d8c-nh6xv        false     false
app1-754f779d8c-pfpcf        false     false
testclient-fqsc6             false     false
Cilium agent 'cilium-dbtvg': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 48 Failed 0
Cilium agent 'cilium-xwscp': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 58 Failed 0
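
The one-line agent summaries above are a condensed form of the agent status; the full detail can be pulled directly from either agent pod (a sketch using this run's pod names):

# Full status from each Cilium agent, including controller and health detail.
kubectl exec -n kube-system cilium-dbtvg -c cilium-agent -- cilium status
kubectl exec -n kube-system cilium-xwscp -c cilium-agent -- cilium status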


Standard Error

13:12:38 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathServicesTest Checks E/W loadbalancing (ClusterIP, NodePort from inside cluster, etc) with L7 policy
13:12:38 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.25-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l7-policy-demo.yaml
13:12:45 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[fd04::12]:30220/hello"
13:12:45 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://127.0.0.1:30363/hello"
13:12:45 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://192.168.56.12:30363/hello"
13:12:45 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://192.168.56.11:30363/hello"
13:12:45 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[::ffff:192.168.56.11]:30363/hello"
13:12:45 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[fd04::11]:31026"
13:12:45 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://192.168.56.12:31488"
13:12:45 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[::ffff:127.0.0.1]:31488"
13:12:45 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://192.168.56.11:31488"
13:12:45 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[fd03::d9fb]:10069/hello"
13:12:45 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[::ffff:192.168.56.11]:31488"
13:12:45 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[fd03::d9fb]:10080"
13:12:45 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[::ffff:192.168.56.12]:30363/hello"
13:12:45 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[::ffff:127.0.0.1]:30363/hello"
13:12:45 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://10.96.160.73:10080"
13:12:45 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://10.96.160.73:10069/hello"
13:12:45 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://127.0.0.1:31488"
13:12:45 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[fd04::12]:31026"
13:12:45 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[fd04::11]:30220/hello"
13:12:45 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[::ffff:192.168.56.12]:31488"
13:12:45 STEP: Making 10 curl requests from testclient-fqsc6 pod to service http://[::ffff:192.168.56.11]:31488
13:12:45 STEP: Making 10 curl requests from testclient-fqsc6 pod to service http://192.168.56.11:31488
13:12:45 STEP: Making 10 curl requests from testclient-fqsc6 pod to service http://[::ffff:192.168.56.12]:31488
13:12:45 STEP: Making 10 curl requests from testclient-fqsc6 pod to service http://10.96.160.73:10080
13:12:45 STEP: Making 10 curl requests from testclient-fqsc6 pod to service tftp://[::ffff:192.168.56.11]:30363/hello
13:12:45 STEP: Making 10 curl requests from testclient-fqsc6 pod to service tftp://[fd04::11]:30220/hello
13:12:45 STEP: Making 10 curl requests from testclient-fqsc6 pod to service tftp://10.96.160.73:10069/hello
13:12:45 STEP: Making 10 curl requests from testclient-fqsc6 pod to service tftp://[fd04::12]:30220/hello
13:12:45 STEP: Making 10 curl requests from testclient-fqsc6 pod to service tftp://192.168.56.11:30363/hello
13:12:45 STEP: Making 10 curl requests from testclient-fqsc6 pod to service http://[fd04::12]:31026
13:12:46 STEP: Making 10 curl requests from testclient-fqsc6 pod to service http://[fd04::11]:31026
13:12:46 STEP: Making 10 curl requests from testclient-fqsc6 pod to service tftp://[::ffff:192.168.56.12]:30363/hello
13:12:46 STEP: Making 10 curl requests from testclient-fqsc6 pod to service http://[fd03::d9fb]:10080
13:12:46 STEP: Making 10 curl requests from testclient-fqsc6 pod to service tftp://192.168.56.12:30363/hello
13:12:46 STEP: Making 10 curl requests from testclient-fqsc6 pod to service http://192.168.56.12:31488
13:12:46 STEP: Making 10 curl requests from testclient-fqsc6 pod to service tftp://[fd03::d9fb]:10069/hello
13:12:46 STEP: Making 10 curl requests from testclient-vsbl2 pod to service http://192.168.56.11:31488
13:12:46 STEP: Making 10 curl requests from testclient-vsbl2 pod to service http://[::ffff:192.168.56.12]:31488
13:12:46 STEP: Making 10 curl requests from testclient-vsbl2 pod to service http://[::ffff:192.168.56.11]:31488
13:12:46 STEP: Making 10 curl requests from testclient-vsbl2 pod to service http://[fd04::12]:31026
13:12:47 STEP: Making 10 curl requests from testclient-vsbl2 pod to service http://[fd03::d9fb]:10080
13:12:47 STEP: Making 10 curl requests from testclient-vsbl2 pod to service tftp://[fd04::12]:30220/hello
13:12:47 STEP: Making 10 curl requests from testclient-vsbl2 pod to service tftp://[::ffff:192.168.56.11]:30363/hello
13:12:47 STEP: Making 10 curl requests from testclient-vsbl2 pod to service http://[fd04::11]:31026
13:12:47 STEP: Making 10 curl requests from testclient-vsbl2 pod to service tftp://[::ffff:192.168.56.12]:30363/hello
13:12:47 STEP: Making 10 curl requests from testclient-vsbl2 pod to service http://192.168.56.12:31488
13:12:47 STEP: Making 10 curl requests from testclient-vsbl2 pod to service tftp://192.168.56.11:30363/hello
13:12:47 STEP: Making 10 curl requests from testclient-vsbl2 pod to service tftp://[fd04::11]:30220/hello
13:12:47 STEP: Making 10 curl requests from testclient-vsbl2 pod to service tftp://192.168.56.12:30363/hello
13:12:47 STEP: Making 10 curl requests from testclient-vsbl2 pod to service tftp://10.96.160.73:10069/hello
13:12:47 STEP: Making 10 curl requests from testclient-vsbl2 pod to service tftp://[fd03::d9fb]:10069/hello
13:12:47 STEP: Making 10 curl requests from testclient-vsbl2 pod to service http://10.96.160.73:10080
FAIL: Request from testclient-vsbl2 pod to service tftp://[fd04::11]:30220/hello failed
(Expected command, exit code, stdout, and stderr are identical to the Stacktrace section above.)

=== Test Finished at 2023-04-18T13:12:48Z====
13:12:48 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathServicesTest
===================== TEST FAILED =====================
13:12:48 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathServicesTest
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE           NAME                               READY   STATUS    RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 cilium-monitoring   grafana-98b4b9789-nvncl            0/1     Running   0          17m     10.0.0.231      k8s1   <none>           <none>
	 cilium-monitoring   prometheus-6f66c554f4-rbhk4        1/1     Running   0          17m     10.0.0.119      k8s1   <none>           <none>
	 default             app1-754f779d8c-nh6xv              2/2     Running   0          5m21s   10.0.1.203      k8s1   <none>           <none>
	 default             app1-754f779d8c-pfpcf              2/2     Running   0          5m21s   10.0.1.91       k8s1   <none>           <none>
	 default             app2-86858f4bd6-ghql6              1/1     Running   0          5m21s   10.0.1.191      k8s1   <none>           <none>
	 default             app3-57f78c8bdf-49rld              1/1     Running   0          5m21s   10.0.1.171      k8s1   <none>           <none>
	 default             echo-9674cb9d4-2q9pf               2/2     Running   0          5m21s   10.0.0.82       k8s2   <none>           <none>
	 default             echo-9674cb9d4-vk9xs               2/2     Running   0          5m21s   10.0.1.206      k8s1   <none>           <none>
	 default             test-k8s2-56f67cd755-5qwt4         2/2     Running   0          5m21s   10.0.0.241      k8s2   <none>           <none>
	 default             testclient-fqsc6                   1/1     Running   0          5m21s   10.0.1.10       k8s1   <none>           <none>
	 default             testclient-vsbl2                   1/1     Running   0          5m21s   10.0.0.73       k8s2   <none>           <none>
	 default             testds-5ll8l                       2/2     Running   0          5m21s   10.0.0.41       k8s2   <none>           <none>
	 default             testds-c2427                       2/2     Running   0          5m21s   10.0.1.49       k8s1   <none>           <none>
	 kube-system         cilium-dbtvg                       1/1     Running   0          100s    192.168.56.12   k8s2   <none>           <none>
	 kube-system         cilium-operator-75ccb4f47c-h25vt   1/1     Running   0          100s    192.168.56.12   k8s2   <none>           <none>
	 kube-system         cilium-operator-75ccb4f47c-h5pbz   1/1     Running   0          100s    192.168.56.11   k8s1   <none>           <none>
	 kube-system         cilium-xwscp                       1/1     Running   0          100s    192.168.56.11   k8s1   <none>           <none>
	 kube-system         coredns-567b6dd84-jbxhd            1/1     Running   0          5m56s   10.0.0.90       k8s2   <none>           <none>
	 kube-system         etcd-k8s1                          1/1     Running   0          22m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-apiserver-k8s1                1/1     Running   0          22m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-controller-manager-k8s1       1/1     Running   0          22m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-proxy-ppbcz                   1/1     Running   0          17m     192.168.56.12   k8s2   <none>           <none>
	 kube-system         kube-proxy-wjgkw                   1/1     Running   0          21m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-scheduler-k8s1                1/1     Running   0          22m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         log-gatherer-5lsz6                 1/1     Running   0          17m     192.168.56.12   k8s2   <none>           <none>
	 kube-system         log-gatherer-nbs4f                 1/1     Running   0          17m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         registry-adder-4x7p8               1/1     Running   0          17m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         registry-adder-5nxx6               1/1     Running   0          17m     192.168.56.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-dbtvg cilium-xwscp]
cmd: kubectl exec -n kube-system cilium-dbtvg -c cilium-agent -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend               Service Type   Backend                            
	 1    10.96.0.1:443          ClusterIP      1 => 192.168.56.11:6443 (active)   
	 2    10.96.0.10:53          ClusterIP      1 => 10.0.0.90:53 (active)         
	 3    10.96.0.10:9153        ClusterIP      1 => 10.0.0.90:9153 (active)       
	 4    10.110.229.96:3000     ClusterIP                                         
	 5    10.105.173.248:9090    ClusterIP      1 => 10.0.0.119:9090 (active)      
	 7    10.98.249.57:80        ClusterIP      1 => 10.0.1.91:80 (active)         
	                                            2 => 10.0.1.203:80 (active)        
	 8    10.98.249.57:69        ClusterIP      1 => 10.0.1.91:69 (active)         
	                                            2 => 10.0.1.203:69 (active)        
	 9    10.102.161.84:80       ClusterIP      1 => 10.0.1.49:80 (active)         
	                                            2 => 10.0.0.41:80 (active)         
	 10   10.102.161.84:69       ClusterIP      1 => 10.0.1.49:69 (active)         
	                                            2 => 10.0.0.41:69 (active)         
	 11   10.96.160.73:10080     ClusterIP      1 => 10.0.1.49:80 (active)         
	                                            2 => 10.0.0.41:80 (active)         
	 12   10.96.160.73:10069     ClusterIP      1 => 10.0.1.49:69 (active)         
	                                            2 => 10.0.0.41:69 (active)         
	 13   10.101.188.204:10080   ClusterIP      1 => 10.0.1.49:80 (active)         
	                                            2 => 10.0.0.41:80 (active)         
	 14   10.101.188.204:10069   ClusterIP      1 => 10.0.1.49:69 (active)         
	                                            2 => 10.0.0.41:69 (active)         
	 15   10.107.146.30:10080    ClusterIP      1 => 10.0.1.49:80 (active)         
	                                            2 => 10.0.0.41:80 (active)         
	 16   10.107.146.30:10069    ClusterIP      1 => 10.0.1.49:69 (active)         
	                                            2 => 10.0.0.41:69 (active)         
	 17   10.104.12.236:10080    ClusterIP      1 => 10.0.0.241:80 (active)        
	 18   10.104.12.236:10069    ClusterIP      1 => 10.0.0.241:69 (active)        
	 19   10.105.166.221:10080   ClusterIP      1 => 10.0.0.241:80 (active)        
	 20   10.105.166.221:10069   ClusterIP      1 => 10.0.0.241:69 (active)        
	 21   10.110.116.201:80      ClusterIP      1 => 10.0.1.49:80 (active)         
	                                            2 => 10.0.0.41:80 (active)         
	 22   10.100.20.216:80       ClusterIP      1 => 10.0.0.241:80 (active)        
	 23   10.102.138.112:20069   ClusterIP      1 => 10.0.1.49:69 (active)         
	                                            2 => 10.0.0.41:69 (active)         
	 24   10.102.138.112:20080   ClusterIP      1 => 10.0.1.49:80 (active)         
	                                            2 => 10.0.0.41:80 (active)         
	 25   10.105.221.96:80       ClusterIP      1 => 10.0.1.206:80 (active)        
	                                            2 => 10.0.0.82:80 (active)         
	 26   10.105.221.96:69       ClusterIP      1 => 10.0.1.206:69 (active)        
	                                            2 => 10.0.0.82:69 (active)         
	 27   [fd03::d125]:80        ClusterIP      1 => [fd02::16e]:80 (active)       
	                                            2 => [fd02::1f1]:80 (active)       
	 28   [fd03::d125]:69        ClusterIP      1 => [fd02::16e]:69 (active)       
	                                            2 => [fd02::1f1]:69 (active)       
	 29   [fd03::42db]:69        ClusterIP      1 => [fd02::1a8]:69 (active)       
	                                            2 => [fd02::d7]:69 (active)        
	 30   [fd03::42db]:80        ClusterIP      1 => [fd02::1a8]:80 (active)       
	                                            2 => [fd02::d7]:80 (active)        
	 31   [fd03::d9fb]:10069     ClusterIP      1 => [fd02::1a8]:69 (active)       
	                                            2 => [fd02::d7]:69 (active)        
	 32   [fd03::d9fb]:10080     ClusterIP      1 => [fd02::1a8]:80 (active)       
	                                            2 => [fd02::d7]:80 (active)        
	 33   [fd03::4824]:10080     ClusterIP      1 => [fd02::1a8]:80 (active)       
	                                            2 => [fd02::d7]:80 (active)        
	 34   [fd03::4824]:10069     ClusterIP      1 => [fd02::1a8]:69 (active)       
	                                            2 => [fd02::d7]:69 (active)        
	 35   [fd03::1a29]:10080     ClusterIP      1 => [fd02::1a8]:80 (active)       
	                                            2 => [fd02::d7]:80 (active)        
	 36   [fd03::1a29]:10069     ClusterIP      1 => [fd02::1a8]:69 (active)       
	                                            2 => [fd02::d7]:69 (active)        
	 37   [fd03::1533]:10080     ClusterIP      1 => [fd02::19]:80 (active)        
	 38   [fd03::1533]:10069     ClusterIP      1 => [fd02::19]:69 (active)        
	 39   [fd03::9d3d]:10080     ClusterIP      1 => [fd02::19]:80 (active)        
	 40   [fd03::9d3d]:10069     ClusterIP      1 => [fd02::19]:69 (active)        
	 41   [fd03::3475]:20080     ClusterIP      1 => [fd02::1a8]:80 (active)       
	                                            2 => [fd02::d7]:80 (active)        
	 42   [fd03::3475]:20069     ClusterIP      1 => [fd02::1a8]:69 (active)       
	                                            2 => [fd02::d7]:69 (active)        
	 43   [fd03::422d]:80        ClusterIP      1 => [fd02::128]:80 (active)       
	                                            2 => [fd02::ee]:80 (active)        
	 44   [fd03::422d]:69        ClusterIP      1 => [fd02::128]:69 (active)       
	                                            2 => [fd02::ee]:69 (active)        
	 45   10.99.36.99:80         ClusterIP      1 => 10.0.1.206:80 (active)        
	                                            2 => 10.0.0.82:80 (active)         
	 46   10.99.36.99:69         ClusterIP      1 => 10.0.1.206:69 (active)        
	                                            2 => 10.0.0.82:69 (active)         
	 47   [fd03::12e0]:80        ClusterIP      1 => [fd02::128]:80 (active)       
	                                            2 => [fd02::ee]:80 (active)        
	 48   [fd03::12e0]:69        ClusterIP      1 => [fd02::128]:69 (active)       
	                                            2 => [fd02::ee]:69 (active)        
	 49   10.102.233.207:443     ClusterIP      1 => 192.168.56.12:4244 (active)   
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-dbtvg -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                  IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                       
	 201        Enabled            Disabled          38932      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default       fd02::d7   10.0.0.41    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                     
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                              
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                      
	                                                            k8s:zgroup=testDS                                                                                            
	 846        Disabled           Disabled          25216      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default       fd02::19   10.0.0.241   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                     
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                              
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                      
	                                                            k8s:zgroup=test-k8s2                                                                                         
	 923        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                           ready   
	                                                            reserved:host                                                                                                
	 2460       Enabled            Enabled           31372      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default       fd02::ee   10.0.0.82    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                     
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                              
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                      
	                                                            k8s:name=echo                                                                                                
	 3132       Disabled           Disabled          5187       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system   fd02::6f   10.0.0.90    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                     
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                              
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                  
	                                                            k8s:k8s-app=kube-dns                                                                                         
	 3428       Disabled           Disabled          4          reserved:health                                                              fd02::5c   10.0.0.19    ready   
	 3536       Disabled           Disabled          29155      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default       fd02::4c   10.0.0.73    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                     
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                              
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                      
	                                                            k8s:zgroup=testDSClient                                                                                      
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-xwscp -c cilium-agent -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend               Service Type   Backend                            
	 1    10.96.0.1:443          ClusterIP      1 => 192.168.56.11:6443 (active)   
	 2    10.96.0.10:9153        ClusterIP      1 => 10.0.0.90:9153 (active)       
	 3    10.96.0.10:53          ClusterIP      1 => 10.0.0.90:53 (active)         
	 4    10.110.229.96:3000     ClusterIP                                         
	 5    10.105.173.248:9090    ClusterIP      1 => 10.0.0.119:9090 (active)      
	 7    10.98.249.57:69        ClusterIP      1 => 10.0.1.91:69 (active)         
	                                            2 => 10.0.1.203:69 (active)        
	 8    10.98.249.57:80        ClusterIP      1 => 10.0.1.91:80 (active)         
	                                            2 => 10.0.1.203:80 (active)        
	 9    10.102.161.84:80       ClusterIP      1 => 10.0.1.49:80 (active)         
	                                            2 => 10.0.0.41:80 (active)         
	 10   10.102.161.84:69       ClusterIP      1 => 10.0.1.49:69 (active)         
	                                            2 => 10.0.0.41:69 (active)         
	 11   10.96.160.73:10080     ClusterIP      1 => 10.0.1.49:80 (active)         
	                                            2 => 10.0.0.41:80 (active)         
	 12   10.96.160.73:10069     ClusterIP      1 => 10.0.1.49:69 (active)         
	                                            2 => 10.0.0.41:69 (active)         
	 13   10.101.188.204:10080   ClusterIP      1 => 10.0.1.49:80 (active)         
	                                            2 => 10.0.0.41:80 (active)         
	 14   10.101.188.204:10069   ClusterIP      1 => 10.0.1.49:69 (active)         
	                                            2 => 10.0.0.41:69 (active)         
	 15   10.107.146.30:10080    ClusterIP      1 => 10.0.1.49:80 (active)         
	                                            2 => 10.0.0.41:80 (active)         
	 16   10.107.146.30:10069    ClusterIP      1 => 10.0.1.49:69 (active)         
	                                            2 => 10.0.0.41:69 (active)         
	 17   10.104.12.236:10069    ClusterIP      1 => 10.0.0.241:69 (active)        
	 18   10.104.12.236:10080    ClusterIP      1 => 10.0.0.241:80 (active)        
	 19   10.105.166.221:10080   ClusterIP      1 => 10.0.0.241:80 (active)        
	 20   10.105.166.221:10069   ClusterIP      1 => 10.0.0.241:69 (active)        
	 21   10.110.116.201:80      ClusterIP      1 => 10.0.1.49:80 (active)         
	                                            2 => 10.0.0.41:80 (active)         
	 22   10.100.20.216:80       ClusterIP      1 => 10.0.0.241:80 (active)        
	 23   10.102.138.112:20069   ClusterIP      1 => 10.0.1.49:69 (active)         
	                                            2 => 10.0.0.41:69 (active)         
	 24   10.102.138.112:20080   ClusterIP      1 => 10.0.1.49:80 (active)         
	                                            2 => 10.0.0.41:80 (active)         
	 25   10.105.221.96:69       ClusterIP      1 => 10.0.1.206:69 (active)        
	                                            2 => 10.0.0.82:69 (active)         
	 26   10.105.221.96:80       ClusterIP      1 => 10.0.1.206:80 (active)        
	                                            2 => 10.0.0.82:80 (active)         
	 27   [fd03::d125]:80        ClusterIP      1 => [fd02::16e]:80 (active)       
	                                            2 => [fd02::1f1]:80 (active)       
	 28   [fd03::d125]:69        ClusterIP      1 => [fd02::16e]:69 (active)       
	                                            2 => [fd02::1f1]:69 (active)       
	 29   [fd03::42db]:80        ClusterIP      1 => [fd02::1a8]:80 (active)       
	                                            2 => [fd02::d7]:80 (active)        
	 30   [fd03::42db]:69        ClusterIP      1 => [fd02::1a8]:69 (active)       
	                                            2 => [fd02::d7]:69 (active)        
	 31   [fd03::d9fb]:10080     ClusterIP      1 => [fd02::1a8]:80 (active)       
	                                            2 => [fd02::d7]:80 (active)        
	 32   [fd03::d9fb]:10069     ClusterIP      1 => [fd02::1a8]:69 (active)       
	                                            2 => [fd02::d7]:69 (active)        
	 33   [fd03::4824]:10080     ClusterIP      1 => [fd02::1a8]:80 (active)       
	                                            2 => [fd02::d7]:80 (active)        
	 34   [fd03::4824]:10069     ClusterIP      1 => [fd02::1a8]:69 (active)       
	                                            2 => [fd02::d7]:69 (active)        
	 35   [fd03::1a29]:10080     ClusterIP      1 => [fd02::1a8]:80 (active)       
	                                            2 => [fd02::d7]:80 (active)        
	 36   [fd03::1a29]:10069     ClusterIP      1 => [fd02::1a8]:69 (active)       
	                                            2 => [fd02::d7]:69 (active)        
	 37   [fd03::1533]:10080     ClusterIP      1 => [fd02::19]:80 (active)        
	 38   [fd03::1533]:10069     ClusterIP      1 => [fd02::19]:69 (active)        
	 39   [fd03::9d3d]:10080     ClusterIP      1 => [fd02::19]:80 (active)        
	 40   [fd03::9d3d]:10069     ClusterIP      1 => [fd02::19]:69 (active)        
	 41   [fd03::3475]:20069     ClusterIP      1 => [fd02::1a8]:69 (active)       
	                                            2 => [fd02::d7]:69 (active)        
	 42   [fd03::3475]:20080     ClusterIP      1 => [fd02::1a8]:80 (active)       
	                                            2 => [fd02::d7]:80 (active)        
	 43   [fd03::422d]:80        ClusterIP      1 => [fd02::128]:80 (active)       
	                                            2 => [fd02::ee]:80 (active)        
	 44   [fd03::422d]:69        ClusterIP      1 => [fd02::128]:69 (active)       
	                                            2 => [fd02::ee]:69 (active)        
	 45   10.99.36.99:80         ClusterIP      1 => 10.0.1.206:80 (active)        
	                                            2 => 10.0.0.82:80 (active)         
	 46   10.99.36.99:69         ClusterIP      1 => 10.0.1.206:69 (active)        
	                                            2 => 10.0.0.82:69 (active)         
	 47   [fd03::12e0]:80        ClusterIP      1 => [fd02::128]:80 (active)       
	                                            2 => [fd02::ee]:80 (active)        
	 48   [fd03::12e0]:69        ClusterIP      1 => [fd02::128]:69 (active)       
	                                            2 => [fd02::ee]:69 (active)        
	 49   10.102.233.207:443     ClusterIP      1 => 192.168.56.11:4244 (active)   
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-xwscp -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                              IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                    
	 232        Enabled            Disabled          38932      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default   fd02::1a8   10.0.1.49    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                   
	                                                            k8s:zgroup=testDS                                                                                         
	 339        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                        ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                 
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                               
	                                                            reserved:host                                                                                             
	 1728       Disabled           Disabled          18432      k8s:appSecond=true                                                       fd02::1b2   10.0.1.191   ready   
	                                                            k8s:id=app2                                                                                               
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                    
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=app2-account                                                      
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                   
	                                                            k8s:zgroup=testapp                                                                                        
	 2106       Enabled            Enabled           31372      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default   fd02::128   10.0.1.206   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                   
	                                                            k8s:name=echo                                                                                             
	 2397       Disabled           Disabled          29155      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default   fd02::18a   10.0.1.10    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                   
	 3038       Disabled           Disabled          36421      k8s:id=app1                                                              fd02::16e   10.0.1.91    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                    
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=app1-account                                                      
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                   
	                                                            k8s:zgroup=testapp                                                                                        
	 3115       Disabled           Disabled          36421      k8s:id=app1                                                              fd02::1f1   10.0.1.203   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                    
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=app1-account                                                      
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                   
	                                                            k8s:zgroup=testapp                                                                                        
	 3710       Disabled           Disabled          4          reserved:health                                                          fd02::1de   10.0.1.202   ready   
	 3787       Disabled           Disabled          23344      k8s:id=app3                                                              fd02::12f   10.0.1.171   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                    
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                   
	                                                            k8s:zgroup=testapp                                                                                        
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
13:13:01 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|4b8b6cad_K8sDatapathServicesTest_Checks_E-W_loadbalancing_(ClusterIP,_NodePort_from_inside_cluster,_etc)_with_L7_policy_Tests_NodePort_with_L7_Policy.zip]]
13:13:02 STEP: Running AfterAll block for EntireTestsuite K8sDatapathServicesTest Checks E/W loadbalancing (ClusterIP, NodePort from inside cluster, etc) with L7 policy
13:13:05 STEP: Running AfterAll block for EntireTestsuite K8sDatapathServicesTest Checks E/W loadbalancing (ClusterIP, NodePort from inside cluster, etc)


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-1.25-kernel-4.19//1818/artifact/4b8b6cad_K8sDatapathServicesTest_Checks_E-W_loadbalancing_(ClusterIP,_NodePort_from_inside_cluster,_etc)_with_L7_policy_Tests_NodePort_with_L7_Policy.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.25-kernel-4.19//1818/artifact/test_results_Cilium-PR-K8s-1.25-kernel-4.19_1818_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.25-kernel-4.19/1818/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

@maintainer-s-little-helper bot added the ci/flake label Apr 18, 2023
@pchaigno
Member

@github-actions

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.

@github-actions bot added the stale label Jun 18, 2023
@nbusseneau
Member

Duplicate of #25467

@nbusseneau marked this as a duplicate of #25467 Jun 26, 2023