CI: K8sServicesTest Checks service across nodes Tests NodePort BPF Tests with secondary NodePort device #18072

Closed
maintainer-s-little-helper bot opened this issue Nov 30, 2021 · 8 comments
Labels
area/CI: Continuous Integration testing issue or flake
ci/flake: This is a known failure that occurs in the tree. Please investigate me!
sig/datapath: Impacts bpf/ or low-level forwarding details, including map management and monitor messages.
stale: The stale bot thinks this issue is old. Add "pinned" label to prevent this from becoming stale.
Test Name

K8sServicesTest Checks service across nodes Tests NodePort BPF Tests with secondary NodePort device

Failure Output

FAIL: Request from k8s1 to service http://[fd05::11]:31872 failed

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.22-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:527
Request from k8s1 to service http://[fd05::11]:31872 failed
Expected command: kubectl exec -n kube-system log-gatherer-cxd79 -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://[fd05::11]:31872 -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 
	 
	 Hostname: testds-vz6jj
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd05::11]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd05::11]:31872
	 	user-agent=cilium-test-1941/2
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 1941/2 exit code: 0
	 
	 
	 Hostname: testds-vz6jj
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd05::11]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd05::11]:31872
	 	user-agent=cilium-test-1941/5
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 1941/5 exit code: 0
	 
	 
	 Hostname: testds-vz6jj
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd05::11]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd05::11]:31872
	 	user-agent=cilium-test-1941/8
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 1941/8 exit code: 0
	 
	 
	 Hostname: testds-vz6jj
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd05::11]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd05::11]:31872
	 	user-agent=cilium-test-1941/10
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 1941/10 exit code: 0
	 failed: :1941/1=7:1941/3=7:1941/4=7:1941/6=7:1941/7=7:1941/9=7
	 
Stderr:
 	 HTTP/1.1 200 OK
	 Date: Tue, 30 Nov 2021 06:39:21 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Tue, 30 Nov 2021 06:39:21 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Tue, 30 Nov 2021 06:39:22 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Tue, 30 Nov 2021 06:39:23 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 command terminated with exit code 42
	 

/home/jenkins/workspace/Cilium-PR-K8s-1.22-kernel-4.19/src/github.com/cilium/cilium/test/k8sT/service_helpers.go:617
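
In the failing rounds above, curl reports exit code 7 (failed to connect to host), while the remaining rounds of the same ten-request loop reach the backend normally, so roughly half of the connection attempts to the NodePort never complete. A single probe can be replayed by hand, outside the retry wrapper, reusing the pod name and address from this run (log-gatherer-cxd79 and http://[fd05::11]:31872; substitute the values from your own run):

kubectl exec -n kube-system log-gatherer-cxd79 -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 'http://[fd05::11]:31872'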

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 3
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
Session affinity for host reachable services needs kernel 5.7.0 or newer to work properly when accessed from inside cluster: the same service endpoint will be selected from all network namespaces on the host.
Unable to install direct node route {Ifindex: 0 Dst: fd02::100/120 Src: <nil> Gw: <nil> Flags: [] Table: 0 Realm: 0}
Cilium pods: [cilium-58jgl cilium-5hgcm]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
test-k8s2-7f96d84c65-6d9cc              
testclient-8h2fq                        
testclient-mxw6s                        
testds-lvsx9                            
testds-vz6jj                            
coredns-69b675786c-8jbn5                
grafana-5747bcc8f9-xmjbt                
prometheus-655fb888d7-67l98             
Cilium agent 'cilium-58jgl': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 46 Failed 0
Cilium agent 'cilium-5hgcm': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 36 Failed 0


Standard Error

06:38:44 STEP: Installing Cilium
06:38:46 STEP: Waiting for Cilium to become ready
06:39:01 STEP: Validating if Kubernetes DNS is deployed
06:39:01 STEP: Checking if deployment is ready
06:39:02 STEP: Checking if kube-dns service is plumbed correctly
06:39:02 STEP: Checking if pods have identity
06:39:02 STEP: Checking if DNS can resolve
06:39:03 STEP: Kubernetes DNS is up and operational
06:39:03 STEP: Validating Cilium Installation
06:39:03 STEP: Performing Cilium health check
06:39:03 STEP: Performing Cilium status preflight check
06:39:03 STEP: Performing Cilium controllers preflight check
06:39:05 STEP: Performing Cilium service preflight check
06:39:05 STEP: Performing K8s service preflight check
06:39:05 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-58jgl': Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), clean-cilium-state (init)
	 Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory
	 
	 command terminated with exit code 1
	 

06:39:05 STEP: Performing Cilium status preflight check
06:39:05 STEP: Performing Cilium controllers preflight check
06:39:05 STEP: Performing Cilium health check
06:39:07 STEP: Performing Cilium service preflight check
06:39:07 STEP: Performing K8s service preflight check
06:39:07 STEP: Performing Cilium controllers preflight check
06:39:07 STEP: Performing Cilium health check
06:39:07 STEP: Performing Cilium status preflight check
06:39:09 STEP: Performing Cilium service preflight check
06:39:09 STEP: Performing K8s service preflight check
06:39:09 STEP: Performing Cilium status preflight check
06:39:09 STEP: Performing Cilium controllers preflight check
06:39:09 STEP: Performing Cilium health check
06:39:14 STEP: Performing Cilium service preflight check
06:39:14 STEP: Performing K8s service preflight check
06:39:14 STEP: Performing Cilium controllers preflight check
06:39:14 STEP: Performing Cilium status preflight check
06:39:14 STEP: Performing Cilium health check
06:39:16 STEP: Performing Cilium service preflight check
06:39:16 STEP: Performing K8s service preflight check
06:39:18 STEP: Waiting for cilium-operator to be ready
06:39:18 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
06:39:18 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
06:39:18 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[::ffff:192.168.56.11]:31518"
06:39:18 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://10.104.165.2:10080"
06:39:18 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://192.168.56.12:30153/hello"
06:39:18 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://192.168.56.12:31518"
06:39:18 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://10.104.165.2:10069/hello"
06:39:18 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[::ffff:192.168.56.12]:31518"
06:39:18 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://127.0.0.1:31518"
06:39:18 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://127.0.0.1:30153/hello"
06:39:18 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[::ffff:192.168.56.12]:30153/hello"
06:39:18 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[::ffff:192.168.56.11]:30153/hello"
06:39:18 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[::ffff:127.0.0.1]:31518"
06:39:18 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://192.168.57.11:31518"
06:39:18 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[::ffff:127.0.0.1]:30153/hello"
06:39:18 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://192.168.57.11:30153/hello"
06:39:18 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[fd03::de88]:10069/hello"
06:39:18 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://192.168.57.12:31518"
06:39:18 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[fd04::12]:31872"
06:39:18 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[fd04::11]:30232/hello"
06:39:18 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[fd04::12]:30232/hello"
06:39:18 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://192.168.56.11:31518"
06:39:18 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://192.168.56.11:30153/hello"
06:39:18 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[fd05::12]:31872"
06:39:18 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://192.168.57.12:30153/hello"
06:39:18 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[fd05::12]:30232/hello"
06:39:18 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[fd04::11]:31872"
06:39:18 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[fd05::11]:31872"
06:39:18 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[fd03::de88]:10080"
06:39:18 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[fd05::11]:30232/hello"
06:39:18 STEP: Making 10 curl requests from testclient-8h2fq pod to service http://10.104.165.2:10080
06:39:18 STEP: Making 10 curl requests from testclient-8h2fq pod to service http://192.168.56.12:31518
06:39:18 STEP: Making 10 curl requests from testclient-8h2fq pod to service tftp://[fd04::11]:30232/hello
06:39:18 STEP: Making 10 curl requests from testclient-8h2fq pod to service http://[fd04::12]:31872
06:39:18 STEP: Making 10 curl requests from testclient-8h2fq pod to service http://[fd04::11]:31872
06:39:18 STEP: Making 10 curl requests from testclient-8h2fq pod to service http://[::ffff:192.168.56.11]:31518
06:39:18 STEP: Making 10 curl requests from testclient-8h2fq pod to service tftp://[fd04::12]:30232/hello
06:39:18 STEP: Making 10 curl requests from testclient-8h2fq pod to service tftp://192.168.56.11:30153/hello
06:39:18 STEP: Making 10 curl requests from testclient-8h2fq pod to service http://192.168.56.11:31518
06:39:18 STEP: Making 10 curl requests from testclient-8h2fq pod to service tftp://[::ffff:192.168.56.11]:30153/hello
06:39:18 STEP: Making 10 curl requests from testclient-8h2fq pod to service tftp://[::ffff:192.168.56.12]:30153/hello
06:39:18 STEP: Making 10 curl requests from testclient-8h2fq pod to service tftp://192.168.56.12:30153/hello
06:39:18 STEP: Making 10 curl requests from testclient-8h2fq pod to service tftp://[fd03::de88]:10069/hello
06:39:18 STEP: Making 10 curl requests from testclient-8h2fq pod to service http://[::ffff:192.168.56.12]:31518
06:39:18 STEP: Making 10 curl requests from testclient-8h2fq pod to service http://[fd03::de88]:10080
06:39:18 STEP: Making 10 curl requests from testclient-8h2fq pod to service tftp://10.104.165.2:10069/hello
06:39:19 STEP: Making 10 curl requests from testclient-mxw6s pod to service http://10.104.165.2:10080
06:39:19 STEP: Making 10 curl requests from testclient-mxw6s pod to service http://192.168.56.12:31518
06:39:19 STEP: Making 10 curl requests from testclient-mxw6s pod to service tftp://10.104.165.2:10069/hello
06:39:20 STEP: Making 10 curl requests from testclient-mxw6s pod to service tftp://192.168.56.12:30153/hello
06:39:20 STEP: Making 10 curl requests from testclient-mxw6s pod to service tftp://[::ffff:192.168.56.12]:30153/hello
06:39:20 STEP: Making 10 curl requests from testclient-mxw6s pod to service tftp://[::ffff:192.168.56.11]:30153/hello
06:39:20 STEP: Making 10 curl requests from testclient-mxw6s pod to service http://[::ffff:192.168.56.12]:31518
06:39:20 STEP: Making 10 curl requests from testclient-mxw6s pod to service http://192.168.56.11:31518
06:39:20 STEP: Making 10 curl requests from testclient-mxw6s pod to service tftp://192.168.56.11:30153/hello
06:39:20 STEP: Making 10 curl requests from testclient-mxw6s pod to service http://[::ffff:192.168.56.11]:31518
FAIL: Request from k8s1 to service http://[fd05::11]:31872 failed
Expected command: kubectl exec -n kube-system log-gatherer-cxd79 -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://[fd05::11]:31872 -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 
	 
	 Hostname: testds-vz6jj
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd05::11]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd05::11]:31872
	 	user-agent=cilium-test-1941/2
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 1941/2 exit code: 0
	 
	 
	 Hostname: testds-vz6jj
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd05::11]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd05::11]:31872
	 	user-agent=cilium-test-1941/5
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 1941/5 exit code: 0
	 
	 
	 Hostname: testds-vz6jj
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd05::11]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd05::11]:31872
	 	user-agent=cilium-test-1941/8
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 1941/8 exit code: 0
	 
	 
	 Hostname: testds-vz6jj
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd05::11]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd05::11]:31872
	 	user-agent=cilium-test-1941/10
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 1941/10 exit code: 0
	 failed: :1941/1=7:1941/3=7:1941/4=7:1941/6=7:1941/7=7:1941/9=7
	 
Stderr:
 	 HTTP/1.1 200 OK
	 Date: Tue, 30 Nov 2021 06:39:21 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Tue, 30 Nov 2021 06:39:21 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Tue, 30 Nov 2021 06:39:22 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Tue, 30 Nov 2021 06:39:23 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 command terminated with exit code 42
	 

FAIL: Request from k8s1 to service http://[fd03::de88]:10080 failed
Expected command: kubectl exec -n kube-system log-gatherer-cxd79 -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://[fd03::de88]:10080 -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 
	 
	 Hostname: testds-vz6jj
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd03::de88]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd03::de88]:10080
	 	user-agent=cilium-test-8512/1
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 8512/1 exit code: 0
	 
	 
	 Hostname: testds-vz6jj
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd03::de88]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd03::de88]:10080
	 	user-agent=cilium-test-8512/4
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 8512/4 exit code: 0
	 
	 
	 Hostname: testds-vz6jj
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd03::de88]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd03::de88]:10080
	 	user-agent=cilium-test-8512/5
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 8512/5 exit code: 0
	 
	 
	 Hostname: testds-vz6jj
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd03::de88]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd03::de88]:10080
	 	user-agent=cilium-test-8512/6
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 8512/6 exit code: 0
	 
	 
	 Hostname: testds-vz6jj
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd03::de88]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd03::de88]:10080
	 	user-agent=cilium-test-8512/8
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 8512/8 exit code: 0
	 
	 
	 Hostname: testds-vz6jj
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd03::de88]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd03::de88]:10080
	 	user-agent=cilium-test-8512/9
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 8512/9 exit code: 0
	 failed: :8512/2=7:8512/3=7:8512/7=7:8512/10=7
	 
Stderr:
 	 HTTP/1.1 200 OK
	 Date: Tue, 30 Nov 2021 06:39:19 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Tue, 30 Nov 2021 06:39:21 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Tue, 30 Nov 2021 06:39:21 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Tue, 30 Nov 2021 06:39:21 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Tue, 30 Nov 2021 06:39:22 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Tue, 30 Nov 2021 06:39:22 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 command terminated with exit code 42
	 

FAIL: Request from k8s1 to service http://[fd04::11]:31872 failed
Expected command: kubectl exec -n kube-system log-gatherer-cxd79 -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://[fd04::11]:31872 -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 
	 
	 Hostname: testds-vz6jj
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd04::11]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd04::11]:31872
	 	user-agent=cilium-test-11654/1
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 11654/1 exit code: 0
	 
	 
	 Hostname: testds-vz6jj
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd04::11]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd04::11]:31872
	 	user-agent=cilium-test-11654/2
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 11654/2 exit code: 0
	 
	 
	 Hostname: testds-vz6jj
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd04::11]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd04::11]:31872
	 	user-agent=cilium-test-11654/7
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 11654/7 exit code: 0
	 
	 
	 Hostname: testds-vz6jj
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd04::11]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd04::11]:31872
	 	user-agent=cilium-test-11654/9
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 11654/9 exit code: 0
	 
	 
	 Hostname: testds-vz6jj
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd04::11]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd04::11]:31872
	 	user-agent=cilium-test-11654/10
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 11654/10 exit code: 0
	 failed: :11654/3=7:11654/4=7:11654/5=7:11654/6=7:11654/8=7
	 
Stderr:
 	 HTTP/1.1 200 OK
	 Date: Tue, 30 Nov 2021 06:39:21 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Tue, 30 Nov 2021 06:39:21 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Tue, 30 Nov 2021 06:39:25 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Tue, 30 Nov 2021 06:39:26 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Tue, 30 Nov 2021 06:39:26 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 command terminated with exit code 42
	 

FAIL: Request from k8s1 to service http://[fd04::12]:31872 failed
Expected command: kubectl exec -n kube-system log-gatherer-cxd79 -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://[fd04::12]:31872 -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 
	 
	 Hostname: testds-vz6jj
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd04::12]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd04::12]:31872
	 	user-agent=cilium-test-25626/1
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 25626/1 exit code: 0
	 
	 
	 Hostname: testds-vz6jj
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd04::12]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd04::12]:31872
	 	user-agent=cilium-test-25626/2
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 25626/2 exit code: 0
	 
	 
	 Hostname: testds-vz6jj
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd04::12]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd04::12]:31872
	 	user-agent=cilium-test-25626/4
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 25626/4 exit code: 0
	 failed: :25626/3=7:25626/5=7:25626/6=7:25626/7=7:25626/8=7:25626/9=7:25626/10=7
	 
Stderr:
 	 HTTP/1.1 200 OK
	 Date: Tue, 30 Nov 2021 06:39:21 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Tue, 30 Nov 2021 06:39:21 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Tue, 30 Nov 2021 06:39:22 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 command terminated with exit code 42
	 

FAIL: Request from k8s1 to service tftp://[fd05::11]:30232/hello failed
Expected command: kubectl exec -n kube-system log-gatherer-cxd79 -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 tftp://[fd05::11]:30232/hello -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 
	 Hostname: testds-vz6jj
	 
	 Request Information:
	 	client_address=fd04::11
	 	client_port=33167
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 3134/1 exit code: 0
	 
	 Hostname: testds-vz6jj
	 
	 Request Information:
	 	client_address=fd04::11
	 	client_port=37370
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 3134/3 exit code: 0
	 
	 Hostname: testds-vz6jj
	 
	 Request Information:
	 	client_address=fd04::11
	 	client_port=48085
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 3134/5 exit code: 0
	 
	 Hostname: testds-vz6jj
	 
	 Request Information:
	 	client_address=fd04::11
	 	client_port=59971
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 3134/7 exit code: 0
	 
	 Hostname: testds-vz6jj
	 
	 Request Information:
	 	client_address=fd04::11
	 	client_port=48801
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 3134/9 exit code: 0
	 failed: :3134/2=28:3134/4=28:3134/6=28:3134/8=28:3134/10=28
	 
Stderr:
 	 command terminated with exit code 42
	 

FAIL: Request from testclient-8h2fq pod to service tftp://[fd04::11]:30232/hello failed
Expected command: kubectl exec -n default testclient-8h2fq -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 tftp://[fd04::11]:30232/hello -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 
	 Hostname: testds-lvsx9
	 
	 Request Information:
	 	client_address=fd02::186
	 	client_port=37350
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 28653/1 exit code: 0
	 
	 Hostname: testds-lvsx9
	 
	 Request Information:
	 	client_address=fd02::186
	 	client_port=47143
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 28653/3 exit code: 0
	 
	 Hostname: testds-lvsx9
	 
	 Request Information:
	 	client_address=fd02::186
	 	client_port=50833
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 28653/5 exit code: 0
	 
	 Hostname: testds-lvsx9
	 
	 Request Information:
	 	client_address=fd02::186
	 	client_port=53176
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 28653/7 exit code: 0
	 
	 Hostname: testds-lvsx9
	 
	 Request Information:
	 	client_address=fd02::186
	 	client_port=51708
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 28653/10 exit code: 0
	 failed: :28653/2=28:28653/4=28:28653/6=28:28653/8=28:28653/9=28
	 
Stderr:
 	 command terminated with exit code 42
	 

FAIL: Request from k8s1 to service tftp://[fd03::de88]:10069/hello failed
Expected command: kubectl exec -n kube-system log-gatherer-cxd79 -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 tftp://[fd03::de88]:10069/hello -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 
	 Hostname: testds-vz6jj
	 
	 Request Information:
	 	client_address=fd04::11
	 	client_port=37464
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 14391/2 exit code: 0
	 
	 Hostname: testds-vz6jj
	 
	 Request Information:
	 	client_address=fd04::11
	 	client_port=52305
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 14391/4 exit code: 0
	 
	 Hostname: testds-vz6jj
	 
	 Request Information:
	 	client_address=fd04::11
	 	client_port=47008
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 14391/6 exit code: 0
	 
	 Hostname: testds-vz6jj
	 
	 Request Information:
	 	client_address=fd04::11
	 	client_port=46060
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 14391/8 exit code: 0
	 
	 Hostname: testds-vz6jj
	 
	 Request Information:
	 	client_address=fd04::11
	 	client_port=59400
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 14391/10 exit code: 0
	 failed: :14391/1=28:14391/3=28:14391/5=28:14391/7=28:14391/9=28
	 
Stderr:
 	 command terminated with exit code 42
	 

FAIL: Request from testclient-8h2fq pod to service tftp://[fd04::12]:30232/hello failed
Expected command: kubectl exec -n default testclient-8h2fq -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 tftp://[fd04::12]:30232/hello -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 
	 Hostname: testds-lvsx9
	 
	 Request Information:
	 	client_address=fd02::186
	 	client_port=48456
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 31296/2 exit code: 0
	 
	 Hostname: testds-lvsx9
	 
	 Request Information:
	 	client_address=fd02::186
	 	client_port=37665
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 31296/4 exit code: 0
	 
	 Hostname: testds-lvsx9
	 
	 Request Information:
	 	client_address=fd02::186
	 	client_port=38697
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 31296/6 exit code: 0
	 
	 Hostname: testds-lvsx9
	 
	 Request Information:
	 	client_address=fd02::186
	 	client_port=57535
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 31296/8 exit code: 0
	 
	 Hostname: testds-lvsx9
	 
	 Request Information:
	 	client_address=fd02::186
	 	client_port=38974
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 31296/10 exit code: 0
	 failed: :31296/1=28:31296/3=28:31296/5=28:31296/7=28:31296/9=28
	 
Stderr:
 	 command terminated with exit code 42
	 

FAIL: Request from testclient-8h2fq pod to service http://[fd04::11]:31872 failed
Expected command: kubectl exec -n default testclient-8h2fq -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://[fd04::11]:31872 -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 
	 
	 Hostname: testds-lvsx9
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd02::186
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd04::11]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd04::11]:31872
	 	user-agent=cilium-test-31738/3
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 31738/3 exit code: 0
	 
	 
	 Hostname: testds-lvsx9
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd02::186
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd04::11]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd04::11]:31872
	 	user-agent=cilium-test-31738/4
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 31738/4 exit code: 0
	 
	 
	 Hostname: testds-lvsx9
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd02::186
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd04::11]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd04::11]:31872
	 	user-agent=cilium-test-31738/6
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 31738/6 exit code: 0
	 
	 
	 Hostname: testds-lvsx9
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd02::186
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd04::11]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd04::11]:31872
	 	user-agent=cilium-test-31738/7
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 31738/7 exit code: 0
	 
	 
	 Hostname: testds-lvsx9
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd02::186
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd04::11]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd04::11]:31872
	 	user-agent=cilium-test-31738/8
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 31738/8 exit code: 0
	 failed: :31738/1=28:31738/2=28:31738/5=28:31738/9=28:31738/10=28
	 
Stderr:
 	 HTTP/1.1 200 OK
	 Date: Tue, 30 Nov 2021 06:39:30 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Tue, 30 Nov 2021 06:39:30 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Tue, 30 Nov 2021 06:39:35 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Tue, 30 Nov 2021 06:39:35 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Tue, 30 Nov 2021 06:39:35 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 command terminated with exit code 42
	 

FAIL: Request from k8s1 to service tftp://[fd04::12]:30232/hello failed
Expected command: kubectl exec -n kube-system log-gatherer-cxd79 -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 tftp://[fd04::12]:30232/hello -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 
	 Hostname: testds-vz6jj
	 
	 Request Information:
	 	client_address=fd04::11
	 	client_port=47048
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 642/2 exit code: 0
	 
	 Hostname: testds-vz6jj
	 
	 Request Information:
	 	client_address=fd04::11
	 	client_port=51327
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 642/4 exit code: 0
	 
	 Hostname: testds-vz6jj
	 
	 Request Information:
	 	client_address=fd04::11
	 	client_port=57387
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 642/6 exit code: 0
	 
	 Hostname: testds-vz6jj
	 
	 Request Information:
	 	client_address=fd04::11
	 	client_port=38720
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 642/8 exit code: 0
	 
	 Hostname: testds-vz6jj
	 
	 Request Information:
	 	client_address=fd04::11
	 	client_port=56480
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 642/10 exit code: 0
	 failed: :642/1=28:642/3=28:642/5=28:642/7=28:642/9=28
	 
Stderr:
 	 command terminated with exit code 42
	 

FAIL: Request from testclient-8h2fq pod to service http://[fd03::de88]:10080 failed
Expected command: kubectl exec -n default testclient-8h2fq -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://[fd03::de88]:10080 -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 
	 
	 Hostname: testds-lvsx9
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd02::186
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd03::de88]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd03::de88]:10080
	 	user-agent=cilium-test-29482/1
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 29482/1 exit code: 0
	 
	 
	 Hostname: testds-lvsx9
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd02::186
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd03::de88]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd03::de88]:10080
	 	user-agent=cilium-test-29482/3
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 29482/3 exit code: 0
	 
	 
	 Hostname: testds-lvsx9
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd02::186
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd03::de88]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd03::de88]:10080
	 	user-agent=cilium-test-29482/5
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 29482/5 exit code: 0
	 
	 
	 Hostname: testds-lvsx9
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd02::186
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd03::de88]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd03::de88]:10080
	 	user-agent=cilium-test-29482/6
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 29482/6 exit code: 0
	 failed: :29482/2=28:29482/4=28:29482/7=28:29482/8=28:29482/9=28:29482/10=28
	 
Stderr:
 	 HTTP/1.1 200 OK
	 Date: Tue, 30 Nov 2021 06:39:19 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Tue, 30 Nov 2021 06:39:24 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Tue, 30 Nov 2021 06:39:29 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Tue, 30 Nov 2021 06:39:29 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 command terminated with exit code 42
	 

FAIL: Request from testclient-8h2fq pod to service tftp://[fd03::de88]:10069/hello failed
Expected command: kubectl exec -n default testclient-8h2fq -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 tftp://[fd03::de88]:10069/hello -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 
	 Hostname: testds-lvsx9
	 
	 Request Information:
	 	client_address=fd02::186
	 	client_port=54174
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 14540/1 exit code: 0
	 
	 Hostname: testds-lvsx9
	 
	 Request Information:
	 	client_address=fd02::186
	 	client_port=38073
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 14540/5 exit code: 0
	 
	 Hostname: testds-lvsx9
	 
	 Request Information:
	 	client_address=fd02::186
	 	client_port=45648
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 14540/8 exit code: 0
	 
	 Hostname: testds-lvsx9
	 
	 Request Information:
	 	client_address=fd02::186
	 	client_port=54976
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 14540/10 exit code: 0
	 failed: :14540/2=28:14540/3=28:14540/4=28:14540/6=28:14540/7=28:14540/9=28
	 
Stderr:
 	 command terminated with exit code 42
	 

FAIL: Request from k8s1 to service tftp://[fd04::11]:30232/hello failed
Expected command: kubectl exec -n kube-system log-gatherer-cxd79 -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 tftp://[fd04::11]:30232/hello -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 
	 Hostname: testds-vz6jj
	 
	 Request Information:
	 	client_address=fd04::11
	 	client_port=53054
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 28943/2 exit code: 0
	 
	 Hostname: testds-vz6jj
	 
	 Request Information:
	 	client_address=fd04::11
	 	client_port=48558
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 28943/4 exit code: 0
	 
	 Hostname: testds-vz6jj
	 
	 Request Information:
	 	client_address=fd04::11
	 	client_port=43645
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 28943/6 exit code: 0
	 
	 Hostname: testds-vz6jj
	 
	 Request Information:
	 	client_address=fd04::11
	 	client_port=44872
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 28943/8 exit code: 0
	 failed: :28943/1=28:28943/3=28:28943/5=28:28943/7=28:28943/9=28:28943/10=28
	 
Stderr:
 	 command terminated with exit code 42
	 

FAIL: Request from testclient-8h2fq pod to service http://[fd04::12]:31872 failed
Expected command: kubectl exec -n default testclient-8h2fq -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://[fd04::12]:31872 -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 
	 
	 Hostname: testds-lvsx9
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd02::186
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd04::12]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd04::12]:31872
	 	user-agent=cilium-test-2336/1
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 2336/1 exit code: 0
	 
	 
	 Hostname: testds-lvsx9
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd02::186
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd04::12]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd04::12]:31872
	 	user-agent=cilium-test-2336/3
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 2336/3 exit code: 0
	 
	 
	 Hostname: testds-lvsx9
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd02::186
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd04::12]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd04::12]:31872
	 	user-agent=cilium-test-2336/6
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 2336/6 exit code: 0
	 
	 
	 Hostname: testds-vz6jj
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd02::186
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd04::12]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd04::12]:31872
	 	user-agent=cilium-test-2336/10
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 2336/10 exit code: 0
	 failed: :2336/2=28:2336/4=28:2336/5=28:2336/7=28:2336/8=28:2336/9=28
	 
Stderr:
 	 HTTP/1.1 200 OK
	 Date: Tue, 30 Nov 2021 06:39:19 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Tue, 30 Nov 2021 06:39:24 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Tue, 30 Nov 2021 06:39:34 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Tue, 30 Nov 2021 06:39:52 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 command terminated with exit code 42
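
The later failures in this section report curl exit code 28 (operation timed out) rather than 7: the attempt does not complete within the configured --connect-timeout/--max-time, and roughly half of the attempts in each ten-round loop are affected, for both the HTTP and the TFTP NodePorts. As with the HTTP case above, one probe can be replayed by hand from the same client pod, reusing the values from this run (testclient-8h2fq and tftp://[fd04::11]:30232/hello):

kubectl exec -n default testclient-8h2fq -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 tftp://[fd04::11]:30232/hello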
	 

=== Test Finished at 2021-11-30T06:39:52Z====
06:39:52 STEP: Running JustAfterEach block for EntireTestsuite K8sServicesTest
===================== TEST FAILED =====================
06:39:52 STEP: Running AfterFailed block for EntireTestsuite K8sServicesTest
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE           NAME                               READY   STATUS    RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 cilium-monitoring   grafana-5747bcc8f9-xmjbt           1/1     Running   0          54m     10.0.0.109      k8s1   <none>           <none>
	 cilium-monitoring   prometheus-655fb888d7-67l98        1/1     Running   0          54m     10.0.0.244      k8s1   <none>           <none>
	 default             test-k8s2-7f96d84c65-6d9cc         2/2     Running   0          5m49s   10.0.1.16       k8s2   <none>           <none>
	 default             testclient-8h2fq                   1/1     Running   0          5m49s   10.0.1.67       k8s2   <none>           <none>
	 default             testclient-mxw6s                   1/1     Running   0          5m49s   10.0.0.234      k8s1   <none>           <none>
	 default             testds-lvsx9                       2/2     Running   0          5m16s   10.0.1.169      k8s2   <none>           <none>
	 default             testds-vz6jj                       2/2     Running   0          5m49s   10.0.0.35       k8s1   <none>           <none>
	 kube-system         cilium-58jgl                       1/1     Running   0          68s     192.168.56.11   k8s1   <none>           <none>
	 kube-system         cilium-5hgcm                       1/1     Running   0          67s     192.168.56.12   k8s2   <none>           <none>
	 kube-system         cilium-operator-7cc5c5874b-wvpc5   1/1     Running   0          67s     192.168.56.12   k8s2   <none>           <none>
	 kube-system         cilium-operator-7cc5c5874b-zmqnt   1/1     Running   0          67s     192.168.56.11   k8s1   <none>           <none>
	 kube-system         coredns-69b675786c-8jbn5           1/1     Running   0          8m17s   10.0.0.254      k8s1   <none>           <none>
	 kube-system         etcd-k8s1                          1/1     Running   0          57m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-apiserver-k8s1                1/1     Running   0          57m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-controller-manager-k8s1       1/1     Running   0          57m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-proxy-49zsm                   1/1     Running   0          57m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-proxy-jm25d                   1/1     Running   0          55m     192.168.56.12   k8s2   <none>           <none>
	 kube-system         kube-scheduler-k8s1                1/1     Running   0          57m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         log-gatherer-cxd79                 1/1     Running   0          54m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         log-gatherer-d9d2t                 1/1     Running   0          54m     192.168.56.12   k8s2   <none>           <none>
	 kube-system         registry-adder-rq2kl               1/1     Running   0          55m     192.168.56.12   k8s2   <none>           <none>
	 kube-system         registry-adder-rs55m               1/1     Running   0          55m     192.168.56.11   k8s1   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-58jgl cilium-5hgcm]
cmd: kubectl exec -n kube-system cilium-58jgl -c cilium-agent -- cilium service list
Exitcode: 0 
Stdout:
 	 ID    Frontend                Service Type   Backend                   
	 1     10.96.0.1:443           ClusterIP      1 => 192.168.56.11:6443   
	 2     10.96.0.10:53           ClusterIP      1 => 10.0.0.254:53        
	 3     10.96.0.10:9153         ClusterIP      1 => 10.0.0.254:9153      
	 4     10.103.43.76:3000       ClusterIP      1 => 10.0.0.109:3000      
	 5     10.109.141.53:9090      ClusterIP      1 => 10.0.0.244:9090      
	 6     10.104.207.224:80       ClusterIP      1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 7     10.104.207.224:69       ClusterIP      1 => 10.0.0.35:69         
	                                              2 => 10.0.1.169:69        
	 8     10.104.165.2:10080      ClusterIP      1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 9     10.104.165.2:10069      ClusterIP      1 => 10.0.0.35:69         
	                                              2 => 10.0.1.169:69        
	 10    0.0.0.0:31518           NodePort       1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 12    192.168.56.11:31518     NodePort       1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 14    192.168.56.11:30153     NodePort       1 => 10.0.0.35:69         
	                                              2 => 10.0.1.169:69        
	 15    0.0.0.0:30153           NodePort       1 => 10.0.0.35:69         
	                                              2 => 10.0.1.169:69        
	 16    10.107.255.218:10069    ClusterIP      1 => 10.0.0.35:69         
	                                              2 => 10.0.1.169:69        
	 17    10.107.255.218:10080    ClusterIP      1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 18    0.0.0.0:32687           NodePort       1 => 10.0.0.35:69         
	                                              2 => 10.0.1.169:69        
	 19    192.168.56.11:32687     NodePort       1 => 10.0.0.35:69         
	                                              2 => 10.0.1.169:69        
	 21    192.168.56.11:31932     NodePort       1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 22    0.0.0.0:31932           NodePort       1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 24    10.111.252.112:10080    ClusterIP      1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 25    10.111.252.112:10069    ClusterIP      1 => 10.0.0.35:69         
	                                              2 => 10.0.1.169:69        
	 28    192.168.56.11:32297     NodePort       1 => 10.0.0.35:80         
	 29    192.168.56.11:32297/i   NodePort       1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 30    0.0.0.0:32297           NodePort       1 => 10.0.0.35:80         
	 31    0.0.0.0:32297/i         NodePort       1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 32    192.168.56.11:32493     NodePort       1 => 10.0.0.35:69         
	 33    192.168.56.11:32493/i   NodePort       1 => 10.0.0.35:69         
	                                              2 => 10.0.1.169:69        
	 34    0.0.0.0:32493           NodePort       1 => 10.0.0.35:69         
	 35    0.0.0.0:32493/i         NodePort       1 => 10.0.0.35:69         
	                                              2 => 10.0.1.169:69        
	 38    10.102.192.15:10080     ClusterIP      1 => 10.0.1.16:80         
	 39    10.102.192.15:10069     ClusterIP      1 => 10.0.1.16:69         
	 42    192.168.56.11:30751     NodePort                                 
	 43    192.168.56.11:30751/i   NodePort       1 => 10.0.1.16:80         
	 44    0.0.0.0:30751           NodePort                                 
	 45    0.0.0.0:30751/i         NodePort       1 => 10.0.1.16:80         
	 48    192.168.56.11:31096     NodePort                                 
	 49    192.168.56.11:31096/i   NodePort       1 => 10.0.1.16:69         
	 50    0.0.0.0:31096           NodePort                                 
	 51    0.0.0.0:31096/i         NodePort       1 => 10.0.1.16:69         
	 52    10.106.42.186:10080     ClusterIP      1 => 10.0.1.16:80         
	 53    10.106.42.186:10069     ClusterIP      1 => 10.0.1.16:69         
	 55    192.168.56.11:31020     NodePort       1 => 10.0.1.16:80         
	 56    0.0.0.0:31020           NodePort       1 => 10.0.1.16:80         
	 57    0.0.0.0:32614           NodePort       1 => 10.0.1.16:69         
	 59    192.168.56.11:32614     NodePort       1 => 10.0.1.16:69         
	 60    10.109.58.41:80         ClusterIP      1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 62    192.168.56.11:32005     NodePort       1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 63    0.0.0.0:32005           NodePort       1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 64    10.103.160.245:80       ClusterIP      1 => 10.0.1.16:80         
	 67    192.168.56.11:32339     NodePort                                 
	 68    192.168.56.11:32339/i   NodePort       1 => 10.0.1.16:80         
	 69    0.0.0.0:32339           NodePort                                 
	 70    0.0.0.0:32339/i         NodePort       1 => 10.0.1.16:80         
	 71    10.102.11.162:20080     ClusterIP      1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 72    10.102.11.162:20069     ClusterIP      1 => 10.0.0.35:69         
	                                              2 => 10.0.1.169:69        
	 73    192.0.2.233:20080       ExternalIPs    1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 74    192.0.2.233:20069       ExternalIPs    1 => 10.0.0.35:69         
	                                              2 => 10.0.1.169:69        
	 75    192.168.56.11:31936     NodePort       1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 76    0.0.0.0:31936           NodePort       1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 79    192.168.56.11:32721     NodePort       1 => 10.0.0.35:69         
	                                              2 => 10.0.1.169:69        
	 80    0.0.0.0:32721           NodePort       1 => 10.0.0.35:69         
	                                              2 => 10.0.1.169:69        
	 81    [fd03::9442]:69         ClusterIP      1 => [fd02::81]:69        
	                                              2 => [fd02::1bb]:69       
	 82    [fd03::9442]:80         ClusterIP      1 => [fd02::81]:80        
	                                              2 => [fd02::1bb]:80       
	 83    [fd03::de88]:10080      ClusterIP      1 => [fd02::81]:80        
	                                              2 => [fd02::1bb]:80       
	 84    [fd03::de88]:10069      ClusterIP      1 => [fd02::81]:69        
	                                              2 => [fd02::1bb]:69       
	 85    [fd04::11]:31872        NodePort       1 => [fd02::81]:80        
	                                              2 => [fd02::1bb]:80       
	 86    [::]:31872              NodePort       1 => [fd02::81]:80        
	                                              2 => [fd02::1bb]:80       
	 87    [fd04::11]:30232        NodePort       1 => [fd02::81]:69        
	                                              2 => [fd02::1bb]:69       
	 88    [::]:30232              NodePort       1 => [fd02::81]:69        
	                                              2 => [fd02::1bb]:69       
	 89    [fd03::c860]:10080      ClusterIP      1 => [fd02::81]:80        
	                                              2 => [fd02::1bb]:80       
	 90    [fd03::c860]:10069      ClusterIP      1 => [fd02::81]:69        
	                                              2 => [fd02::1bb]:69       
	 91    [fd04::11]:31135        NodePort       1 => [fd02::81]:80        
	                                              2 => [fd02::1bb]:80       
	 92    [::]:31135              NodePort       1 => [fd02::81]:80        
	                                              2 => [fd02::1bb]:80       
	 93    [fd04::11]:30499        NodePort       1 => [fd02::81]:69        
	                                              2 => [fd02::1bb]:69       
	 94    [::]:30499              NodePort       1 => [fd02::81]:69        
	                                              2 => [fd02::1bb]:69       
	 95    [fd03::9a8e]:10080      ClusterIP      1 => [fd02::81]:80        
	                                              2 => [fd02::1bb]:80       
	 96    [fd03::9a8e]:10069      ClusterIP      1 => [fd02::81]:69        
	                                              2 => [fd02::1bb]:69       
	 97    [fd04::11]:30530        NodePort       1 => [fd02::81]:80        
	 98    [fd04::11]:30530/i      NodePort       1 => [fd02::81]:80        
	                                              2 => [fd02::1bb]:80       
	 99    [::]:30530              NodePort       1 => [fd02::81]:80        
	 100   [::]:30530/i            NodePort       1 => [fd02::81]:80        
	                                              2 => [fd02::1bb]:80       
	 101   [fd04::11]:32677        NodePort       1 => [fd02::81]:69        
	 102   [fd04::11]:32677/i      NodePort       1 => [fd02::81]:69        
	                                              2 => [fd02::1bb]:69       
	 103   [::]:32677              NodePort       1 => [fd02::81]:69        
	 104   [::]:32677/i            NodePort       1 => [fd02::81]:69        
	                                              2 => [fd02::1bb]:69       
	 105   [fd03::8a2a]:10080      ClusterIP      1 => [fd02::10c]:80       
	 106   [fd03::8a2a]:10069      ClusterIP      1 => [fd02::10c]:69       
	 107   [fd04::11]:30169        NodePort                                 
	 108   [fd04::11]:30169/i      NodePort       1 => [fd02::10c]:80       
	 109   [::]:30169              NodePort                                 
	 110   [::]:30169/i            NodePort       1 => [fd02::10c]:80       
	 111   [fd04::11]:30931        NodePort                                 
	 112   [fd04::11]:30931/i      NodePort       1 => [fd02::10c]:69       
	 113   [::]:30931              NodePort                                 
	 114   [::]:30931/i            NodePort       1 => [fd02::10c]:69       
	 115   [fd03::ee46]:10080      ClusterIP      1 => [fd02::10c]:80       
	 116   [fd03::ee46]:10069      ClusterIP      1 => [fd02::10c]:69       
	 117   [fd04::11]:31443        NodePort       1 => [fd02::10c]:80       
	 118   [::]:31443              NodePort       1 => [fd02::10c]:80       
	 119   [fd04::11]:30358        NodePort       1 => [fd02::10c]:69       
	 120   [::]:30358              NodePort       1 => [fd02::10c]:69       
	 121   [fd03::b7e1]:20080      ClusterIP      1 => [fd02::81]:80        
	                                              2 => [fd02::1bb]:80       
	 122   [fd03::b7e1]:20069      ClusterIP      1 => [fd02::81]:69        
	                                              2 => [fd02::1bb]:69       
	 123   [fd03::999]:20069       ExternalIPs    1 => [fd02::81]:69        
	                                              2 => [fd02::1bb]:69       
	 124   [fd03::999]:20080       ExternalIPs    1 => [fd02::81]:80        
	                                              2 => [fd02::1bb]:80       
	 125   [fd04::11]:31261        NodePort       1 => [fd02::81]:80        
	                                              2 => [fd02::1bb]:80       
	 126   [::]:31261              NodePort       1 => [fd02::81]:80        
	                                              2 => [fd02::1bb]:80       
	 127   [fd04::11]:30615        NodePort       1 => [fd02::81]:69        
	                                              2 => [fd02::1bb]:69       
	 128   [::]:30615              NodePort       1 => [fd02::81]:69        
	                                              2 => [fd02::1bb]:69       
	 129   [fd05::11]:31443        NodePort       1 => [fd02::10c]:80       
	 130   [fd05::11]:30358        NodePort       1 => [fd02::10c]:69       
	 131   192.168.57.11:32005     NodePort       1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 132   [fd05::11]:30530        NodePort       1 => [fd02::81]:80        
	 133   [fd05::11]:30530/i      NodePort       1 => [fd02::81]:80        
	                                              2 => [fd02::1bb]:80       
	 134   [fd05::11]:32677        NodePort       1 => [fd02::81]:69        
	 135   [fd05::11]:32677/i      NodePort       1 => [fd02::81]:69        
	                                              2 => [fd02::1bb]:69       
	 136   192.168.57.11:32493     NodePort       1 => 10.0.0.35:69         
	 137   192.168.57.11:32493/i   NodePort       1 => 10.0.0.35:69         
	                                              2 => 10.0.1.169:69        
	 138   192.168.57.11:32297     NodePort       1 => 10.0.0.35:80         
	 139   192.168.57.11:32297/i   NodePort       1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 140   192.168.57.11:30751     NodePort                                 
	 141   192.168.57.11:30751/i   NodePort       1 => 10.0.1.16:80         
	 142   192.168.57.11:31096     NodePort                                 
	 143   192.168.57.11:31096/i   NodePort       1 => 10.0.1.16:69         
	 144   192.168.57.11:31020     NodePort       1 => 10.0.1.16:80         
	 145   192.168.57.11:32614     NodePort       1 => 10.0.1.16:69         
	 146   192.168.57.11:32339     NodePort                                 
	 147   192.168.57.11:32339/i   NodePort       1 => 10.0.1.16:80         
	 148   192.168.57.11:31936     NodePort       1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 149   192.168.57.11:32721     NodePort       1 => 10.0.0.35:69         
	                                              2 => 10.0.1.169:69        
	 150   192.168.57.11:31518     NodePort       1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 151   192.168.57.11:30153     NodePort       1 => 10.0.0.35:69         
	                                              2 => 10.0.1.169:69        
	 152   [fd05::11]:31872        NodePort       1 => [fd02::81]:80        
	                                              2 => [fd02::1bb]:80       
	 153   [fd05::11]:30232        NodePort       1 => [fd02::81]:69        
	                                              2 => [fd02::1bb]:69       
	 154   [fd05::11]:30169        NodePort                                 
	 155   [fd05::11]:30169/i      NodePort       1 => [fd02::10c]:80       
	 156   [fd05::11]:30931        NodePort                                 
	 157   [fd05::11]:30931/i      NodePort       1 => [fd02::10c]:69       
	 158   [fd05::11]:31261        NodePort       1 => [fd02::81]:80        
	                                              2 => [fd02::1bb]:80       
	 159   [fd05::11]:30615        NodePort       1 => [fd02::81]:69        
	                                              2 => [fd02::1bb]:69       
	 160   192.168.57.11:31932     NodePort       1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 161   192.168.57.11:32687     NodePort       1 => 10.0.0.35:69         
	                                              2 => 10.0.1.169:69        
	 162   [fd05::11]:31135        NodePort       1 => [fd02::81]:80        
	                                              2 => [fd02::1bb]:80       
	 163   [fd05::11]:30499        NodePort       1 => [fd02::81]:69        
	                                              2 => [fd02::1bb]:69       
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-58jgl -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                        IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                             
	 25         Disabled           Disabled          27962      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default             fd02::a7   10.0.0.234   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                            
	                                                            k8s:zgroup=testDSClient                                                                                            
	 248        Disabled           Disabled          1891       k8s:app=grafana                                                                    fd02::20   10.0.0.109   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                  
	 315        Disabled           Disabled          18709      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system         fd02::56   10.0.0.254   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                        
	                                                            k8s:k8s-app=kube-dns                                                                                               
	 505        Disabled           Disabled          18268      k8s:app=prometheus                                                                 fd02::6f   10.0.0.244   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                             
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                  
	 1020       Disabled           Disabled          4          reserved:health                                                                    fd02::59   10.0.0.233   ready   
	 2601       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                 ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                          
	                                                            k8s:node-role.kubernetes.io/master                                                                                 
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                        
	                                                            reserved:host                                                                                                      
	 3307       Disabled           Disabled          51885      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default             fd02::81   10.0.0.35    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                            
	                                                            k8s:zgroup=testDS                                                                                                  
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-5hgcm -c cilium-agent -- cilium service list
Exitcode: 0 
Stdout:
 	 ID    Frontend                Service Type   Backend                   
	 1     10.96.0.1:443           ClusterIP      1 => 192.168.56.11:6443   
	 2     10.96.0.10:53           ClusterIP      1 => 10.0.0.254:53        
	 3     10.96.0.10:9153         ClusterIP      1 => 10.0.0.254:9153      
	 4     10.103.43.76:3000       ClusterIP      1 => 10.0.0.109:3000      
	 5     10.109.141.53:9090      ClusterIP      1 => 10.0.0.244:9090      
	 6     10.104.207.224:80       ClusterIP      1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 7     10.104.207.224:69       ClusterIP      1 => 10.0.0.35:69         
	                                              2 => 10.0.1.169:69        
	 8     10.104.165.2:10080      ClusterIP      1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 9     10.104.165.2:10069      ClusterIP      1 => 10.0.0.35:69         
	                                              2 => 10.0.1.169:69        
	 10    192.168.56.12:31518     NodePort       1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 11    0.0.0.0:31518           NodePort       1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 14    192.168.56.12:30153     NodePort       1 => 10.0.0.35:69         
	                                              2 => 10.0.1.169:69        
	 15    0.0.0.0:30153           NodePort       1 => 10.0.0.35:69         
	                                              2 => 10.0.1.169:69        
	 16    10.107.255.218:10080    ClusterIP      1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 17    10.107.255.218:10069    ClusterIP      1 => 10.0.0.35:69         
	                                              2 => 10.0.1.169:69        
	 19    192.168.56.12:31932     NodePort       1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 20    0.0.0.0:31932           NodePort       1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 21    0.0.0.0:32687           NodePort       1 => 10.0.0.35:69         
	                                              2 => 10.0.1.169:69        
	 22    192.168.56.12:32687     NodePort       1 => 10.0.0.35:69         
	                                              2 => 10.0.1.169:69        
	 24    10.111.252.112:10080    ClusterIP      1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 25    10.111.252.112:10069    ClusterIP      1 => 10.0.0.35:69         
	                                              2 => 10.0.1.169:69        
	 26    192.168.56.12:32297     NodePort       1 => 10.0.1.169:80        
	 27    192.168.56.12:32297/i   NodePort       1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 28    0.0.0.0:32297           NodePort       1 => 10.0.1.169:80        
	 29    0.0.0.0:32297/i         NodePort       1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 34    192.168.56.12:32493     NodePort       1 => 10.0.1.169:69        
	 35    192.168.56.12:32493/i   NodePort       1 => 10.0.0.35:69         
	                                              2 => 10.0.1.169:69        
	 36    0.0.0.0:32493           NodePort       1 => 10.0.1.169:69        
	 37    0.0.0.0:32493/i         NodePort       1 => 10.0.0.35:69         
	                                              2 => 10.0.1.169:69        
	 38    10.102.192.15:10080     ClusterIP      1 => 10.0.1.16:80         
	 39    10.102.192.15:10069     ClusterIP      1 => 10.0.1.16:69         
	 42    192.168.56.12:30751     NodePort       1 => 10.0.1.16:80         
	 43    192.168.56.12:30751/i   NodePort       1 => 10.0.1.16:80         
	 44    0.0.0.0:30751           NodePort       1 => 10.0.1.16:80         
	 45    0.0.0.0:30751/i         NodePort       1 => 10.0.1.16:80         
	 48    192.168.56.12:31096     NodePort       1 => 10.0.1.16:69         
	 49    192.168.56.12:31096/i   NodePort       1 => 10.0.1.16:69         
	 50    0.0.0.0:31096           NodePort       1 => 10.0.1.16:69         
	 51    0.0.0.0:31096/i         NodePort       1 => 10.0.1.16:69         
	 52    10.106.42.186:10080     ClusterIP      1 => 10.0.1.16:80         
	 53    10.106.42.186:10069     ClusterIP      1 => 10.0.1.16:69         
	 55    192.168.56.12:31020     NodePort       1 => 10.0.1.16:80         
	 56    0.0.0.0:31020           NodePort       1 => 10.0.1.16:80         
	 58    192.168.56.12:32614     NodePort       1 => 10.0.1.16:69         
	 59    0.0.0.0:32614           NodePort       1 => 10.0.1.16:69         
	 60    10.109.58.41:80         ClusterIP      1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 62    192.168.56.12:32005     NodePort       1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 63    0.0.0.0:32005           NodePort       1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 64    10.103.160.245:80       ClusterIP      1 => 10.0.1.16:80         
	 67    192.168.56.12:32339     NodePort       1 => 10.0.1.16:80         
	 68    192.168.56.12:32339/i   NodePort       1 => 10.0.1.16:80         
	 69    0.0.0.0:32339           NodePort       1 => 10.0.1.16:80         
	 70    0.0.0.0:32339/i         NodePort       1 => 10.0.1.16:80         
	 71    10.102.11.162:20069     ClusterIP      1 => 10.0.0.35:69         
	                                              2 => 10.0.1.169:69        
	 72    10.102.11.162:20080     ClusterIP      1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 73    192.0.2.233:20069       ExternalIPs    1 => 10.0.0.35:69         
	                                              2 => 10.0.1.169:69        
	 74    192.0.2.233:20080       ExternalIPs    1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 76    192.168.56.12:32721     NodePort       1 => 10.0.0.35:69         
	                                              2 => 10.0.1.169:69        
	 77    0.0.0.0:32721           NodePort       1 => 10.0.0.35:69         
	                                              2 => 10.0.1.169:69        
	 79    192.168.56.12:31936     NodePort       1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 80    0.0.0.0:31936           NodePort       1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 81    [fd03::9442]:80         ClusterIP      1 => [fd02::81]:80        
	                                              2 => [fd02::1bb]:80       
	 82    [fd03::9442]:69         ClusterIP      1 => [fd02::81]:69        
	                                              2 => [fd02::1bb]:69       
	 83    [fd03::de88]:10080      ClusterIP      1 => [fd02::81]:80        
	                                              2 => [fd02::1bb]:80       
	 84    [fd03::de88]:10069      ClusterIP      1 => [fd02::81]:69        
	                                              2 => [fd02::1bb]:69       
	 85    [fd04::12]:31872        NodePort       1 => [fd02::81]:80        
	                                              2 => [fd02::1bb]:80       
	 86    [::]:31872              NodePort       1 => [fd02::81]:80        
	                                              2 => [fd02::1bb]:80       
	 87    [fd04::12]:30232        NodePort       1 => [fd02::81]:69        
	                                              2 => [fd02::1bb]:69       
	 88    [::]:30232              NodePort       1 => [fd02::81]:69        
	                                              2 => [fd02::1bb]:69       
	 89    [fd03::c860]:10069      ClusterIP      1 => [fd02::81]:69        
	                                              2 => [fd02::1bb]:69       
	 90    [fd03::c860]:10080      ClusterIP      1 => [fd02::81]:80        
	                                              2 => [fd02::1bb]:80       
	 91    [fd04::12]:30499        NodePort       1 => [fd02::81]:69        
	                                              2 => [fd02::1bb]:69       
	 92    [::]:30499              NodePort       1 => [fd02::81]:69        
	                                              2 => [fd02::1bb]:69       
	 93    [fd04::12]:31135        NodePort       1 => [fd02::81]:80        
	                                              2 => [fd02::1bb]:80       
	 94    [::]:31135              NodePort       1 => [fd02::81]:80        
	                                              2 => [fd02::1bb]:80       
	 95    [fd03::9a8e]:10080      ClusterIP      1 => [fd02::81]:80        
	                                              2 => [fd02::1bb]:80       
	 96    [fd03::9a8e]:10069      ClusterIP      1 => [fd02::81]:69        
	                                              2 => [fd02::1bb]:69       
	 97    [fd04::12]:30530        NodePort       1 => [fd02::1bb]:80       
	 98    [fd04::12]:30530/i      NodePort       1 => [fd02::81]:80        
	                                              2 => [fd02::1bb]:80       
	 99    [::]:30530              NodePort       1 => [fd02::1bb]:80       
	 100   [::]:30530/i            NodePort       1 => [fd02::81]:80        
	                                              2 => [fd02::1bb]:80       
	 101   [fd04::12]:32677        NodePort       1 => [fd02::1bb]:69       
	 102   [fd04::12]:32677/i      NodePort       1 => [fd02::81]:69        
	                                              2 => [fd02::1bb]:69       
	 103   [::]:32677              NodePort       1 => [fd02::1bb]:69       
	 104   [::]:32677/i            NodePort       1 => [fd02::81]:69        
	                                              2 => [fd02::1bb]:69       
	 105   [fd03::8a2a]:10080      ClusterIP      1 => [fd02::10c]:80       
	 106   [fd03::8a2a]:10069      ClusterIP      1 => [fd02::10c]:69       
	 107   [fd04::12]:30169        NodePort       1 => [fd02::10c]:80       
	 108   [fd04::12]:30169/i      NodePort       1 => [fd02::10c]:80       
	 109   [::]:30169              NodePort       1 => [fd02::10c]:80       
	 110   [::]:30169/i            NodePort       1 => [fd02::10c]:80       
	 111   [fd04::12]:30931        NodePort       1 => [fd02::10c]:69       
	 112   [fd04::12]:30931/i      NodePort       1 => [fd02::10c]:69       
	 113   [::]:30931              NodePort       1 => [fd02::10c]:69       
	 114   [::]:30931/i            NodePort       1 => [fd02::10c]:69       
	 115   [fd03::ee46]:10080      ClusterIP      1 => [fd02::10c]:80       
	 116   [fd03::ee46]:10069      ClusterIP      1 => [fd02::10c]:69       
	 117   [fd04::12]:31443        NodePort       1 => [fd02::10c]:80       
	 118   [::]:31443              NodePort       1 => [fd02::10c]:80       
	 119   [fd04::12]:30358        NodePort       1 => [fd02::10c]:69       
	 120   [::]:30358              NodePort       1 => [fd02::10c]:69       
	 121   [fd03::b7e1]:20080      ClusterIP      1 => [fd02::81]:80        
	                                              2 => [fd02::1bb]:80       
	 122   [fd03::b7e1]:20069      ClusterIP      1 => [fd02::81]:69        
	                                              2 => [fd02::1bb]:69       
	 123   [fd03::999]:20069       ExternalIPs    1 => [fd02::81]:69        
	                                              2 => [fd02::1bb]:69       
	 124   [fd03::999]:20080       ExternalIPs    1 => [fd02::81]:80        
	                                              2 => [fd02::1bb]:80       
	 125   [fd04::12]:30615        NodePort       1 => [fd02::81]:69        
	                                              2 => [fd02::1bb]:69       
	 126   [::]:30615              NodePort       1 => [fd02::81]:69        
	                                              2 => [fd02::1bb]:69       
	 127   [fd04::12]:31261        NodePort       1 => [fd02::81]:80        
	                                              2 => [fd02::1bb]:80       
	 128   [::]:31261              NodePort       1 => [fd02::81]:80        
	                                              2 => [fd02::1bb]:80       
	 130   192.168.56.12:8080      HostPort       1 => 10.0.1.16:80         
	 131   0.0.0.0:8080            HostPort       1 => 10.0.1.16:80         
	 132   [fd04::12]:8080         HostPort       1 => [fd02::10c]:80       
	 133   [::]:8080               HostPort       1 => [fd02::10c]:80       
	 135   192.168.56.12:6969      HostPort       1 => 10.0.1.16:69         
	 136   0.0.0.0:6969            HostPort       1 => 10.0.1.16:69         
	 137   [fd04::12]:6969         HostPort       1 => [fd02::10c]:69       
	 138   [::]:6969               HostPort       1 => [fd02::10c]:69       
	 139   192.168.57.12:8080      HostPort       1 => 10.0.1.16:80         
	 140   [fd05::12]:8080         HostPort       1 => [fd02::10c]:80       
	 141   192.168.57.12:6969      HostPort       1 => 10.0.1.16:69         
	 142   [fd05::12]:6969         HostPort       1 => [fd02::10c]:69       
	 143   192.168.57.12:32614     NodePort       1 => 10.0.1.16:69         
	 144   192.168.57.12:31020     NodePort       1 => 10.0.1.16:80         
	 145   192.168.57.12:31936     NodePort       1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 146   192.168.57.12:32721     NodePort       1 => 10.0.0.35:69         
	                                              2 => 10.0.1.169:69        
	 147   [fd05::12]:31135        NodePort       1 => [fd02::81]:80        
	                                              2 => [fd02::1bb]:80       
	 148   [fd05::12]:30499        NodePort       1 => [fd02::81]:69        
	                                              2 => [fd02::1bb]:69       
	 149   192.168.57.12:32005     NodePort       1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 150   [fd05::12]:30169        NodePort       1 => [fd02::10c]:80       
	 151   [fd05::12]:30169/i      NodePort       1 => [fd02::10c]:80       
	 152   [fd05::12]:30931        NodePort       1 => [fd02::10c]:69       
	 153   [fd05::12]:30931/i      NodePort       1 => [fd02::10c]:69       
	 154   192.168.57.12:31518     NodePort       1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 155   192.168.57.12:30153     NodePort       1 => 10.0.0.35:69         
	                                              2 => 10.0.1.169:69        
	 156   192.168.57.12:30751     NodePort       1 => 10.0.1.16:80         
	 157   192.168.57.12:30751/i   NodePort       1 => 10.0.1.16:80         
	 158   192.168.57.12:31096     NodePort       1 => 10.0.1.16:69         
	 159   192.168.57.12:31096/i   NodePort       1 => 10.0.1.16:69         
	 160   [fd05::12]:31872        NodePort       1 => [fd02::81]:80        
	                                              2 => [fd02::1bb]:80       
	 161   [fd05::12]:30232        NodePort       1 => [fd02::81]:69        
	                                              2 => [fd02::1bb]:69       
	 162   [fd05::12]:30530        NodePort       1 => [fd02::1bb]:80       
	 163   [fd05::12]:30530/i      NodePort       1 => [fd02::81]:80        
	                                              2 => [fd02::1bb]:80       
	 164   [fd05::12]:32677        NodePort       1 => [fd02::1bb]:69       
	 165   [fd05::12]:32677/i      NodePort       1 => [fd02::81]:69        
	                                              2 => [fd02::1bb]:69       
	 166   [fd05::12]:31443        NodePort       1 => [fd02::10c]:80       
	 167   [fd05::12]:30358        NodePort       1 => [fd02::10c]:69       
	 168   [fd05::12]:31261        NodePort       1 => [fd02::81]:80        
	                                              2 => [fd02::1bb]:80       
	 169   [fd05::12]:30615        NodePort       1 => [fd02::81]:69        
	                                              2 => [fd02::1bb]:69       
	 170   192.168.57.12:31932     NodePort       1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 171   192.168.57.12:32687     NodePort       1 => 10.0.0.35:69         
	                                              2 => 10.0.1.169:69        
	 172   192.168.57.12:32297     NodePort       1 => 10.0.1.169:80        
	 173   192.168.57.12:32297/i   NodePort       1 => 10.0.0.35:80         
	                                              2 => 10.0.1.169:80        
	 174   192.168.57.12:32493     NodePort       1 => 10.0.1.169:69        
	 175   192.168.57.12:32493/i   NodePort       1 => 10.0.0.35:69         
	                                              2 => 10.0.1.169:69        
	 176   192.168.57.12:32339     NodePort       1 => 10.0.1.16:80         
	 177   192.168.57.12:32339/i   NodePort       1 => 10.0.1.16:80         
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-5hgcm -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                              IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                    
	 270        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                        ready   
	                                                            reserved:host                                                                                             
	 376        Disabled           Disabled          27962      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default   fd02::186   10.0.1.67    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                   
	 756        Disabled           Disabled          51885      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default   fd02::1bb   10.0.1.169   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                   
	                                                            k8s:zgroup=testDS                                                                                         
	 1060       Disabled           Disabled          35563      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default   fd02::10c   10.0.1.16    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                   
	                                                            k8s:zgroup=test-k8s2                                                                                      
	 2766       Disabled           Disabled          4          reserved:health                                                          fd02::1cc   10.0.1.23    ready   
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
06:41:03 STEP: Running AfterEach for block EntireTestsuite K8sServicesTest
06:41:03 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|5b7cdc3d_K8sServicesTest_Checks_service_across_nodes_Tests_NodePort_BPF_Tests_with_secondary_NodePort_device.zip]]


ZIP Links:

Click to show.

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.19//26/artifact/12ff0200_K8sUpdates_Tests_upgrade_and_downgrade_from_a_Cilium_stable_image_to_master.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.19//26/artifact/5b7cdc3d_K8sServicesTest_Checks_service_across_nodes_Tests_NodePort_BPF_Tests_with_secondary_NodePort_device.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.19//26/artifact/test_results_Cilium-PR-K8s-1.22-kernel-4.19_26_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.19/26/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

maintainer-s-little-helper bot added the ci/flake label on Nov 30, 2021
joestringer added this to "To quarantine" in 1.11 CI via automation on Dec 1, 2021
@joestringer
Member

Note that this is an IPv6-specific failure that likely has more in common with other recent IPv6 failures on master (eg #18014) than with the IPv4 failures of the same test (eg #12511).

This test is also skipped in all cloud builds and whenever the environment has 3 nodes (it is already quarantined due to the flakiness raised in #12511).
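
For anyone triaging locally, the failing check can be re-run by hand with the same curl loop the test executes (copied from the failure output above). The log-gatherer pod name and the secondary-device NodePort frontend change from run to run, so the <log-gatherer-pod> and <node-port> values below are placeholders to be taken from the current failure:

# Substitute <log-gatherer-pod> and <node-port> from the failure output of the run being investigated.
kubectl exec -n kube-system <log-gatherer-pod> -- /bin/bash -c '
  fails=""; id=$RANDOM
  for i in $(seq 1 10); do
    # Same curl invocation as the test: fail fast on connect timeouts, tag each request via User-Agent.
    if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 \
        "http://[fd05::11]:<node-port>" -H "User-Agent: cilium-test-$id/$i"; then
      echo "Test round $id/$i exit code: $?"
    else
      fails=$fails:$id/$i=$?
    fi
  done
  if [ -n "$fails" ]; then echo "failed: $fails"; fi
  # Exit 42 if any round failed, mirroring how the test reports the flake.
  cnt="${fails//[^:]}"
  if [ ${#cnt} -gt 0 ]; then exit 42; fi'

In the failures captured here only a subset of the 10 rounds fail with curl exit code 7 (connection refused/failed), which is consistent with an intermittent datapath issue on the secondary NodePort device rather than a fully broken service.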

@maintainer-s-little-helper
Author

PR #18087 hit this flake with 86.94% similarity:

Click to show.

Test Name

K8sServicesTest Checks service across nodes Tests NodePort BPF Tests with secondary NodePort device

Failure Output

FAIL: Request from k8s1 to service http://[fd05::11]:32271 failed

Stacktrace

Click to show.
/home/jenkins/workspace/Cilium-PR-K8s-1.22-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:527
Request from k8s1 to service http://[fd05::11]:32271 failed
Expected command: kubectl exec -n kube-system log-gatherer-m86n8 -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://[fd05::11]:32271 -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 
	 
	 Hostname: testds-7svvf
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd05::11]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd05::11]:32271
	 	user-agent=cilium-test-8631/2
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 8631/2 exit code: 0
	 
	 
	 Hostname: testds-7svvf
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd05::11]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd05::11]:32271
	 	user-agent=cilium-test-8631/4
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 8631/4 exit code: 0
	 
	 
	 Hostname: testds-7svvf
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd05::11]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd05::11]:32271
	 	user-agent=cilium-test-8631/6
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 8631/6 exit code: 0
	 
	 
	 Hostname: testds-7svvf
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd05::11]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd05::11]:32271
	 	user-agent=cilium-test-8631/8
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 8631/8 exit code: 0
	 
	 
	 Hostname: testds-7svvf
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd05::11]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd05::11]:32271
	 	user-agent=cilium-test-8631/9
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 8631/9 exit code: 0
	 
	 
	 Hostname: testds-7svvf
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd05::11]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd05::11]:32271
	 	user-agent=cilium-test-8631/10
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 8631/10 exit code: 0
	 failed: :8631/1=7:8631/3=7:8631/5=7:8631/7=7
	 
Stderr:
 	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:47:35 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:47:37 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:47:38 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:47:39 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:47:39 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:47:39 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 command terminated with exit code 42
	 

/home/jenkins/workspace/Cilium-PR-K8s-1.22-kernel-4.19/src/github.com/cilium/cilium/test/k8sT/service_helpers.go:617

Standard Output

Click to show.
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 3
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
Session affinity for host reachable services needs kernel 5.7.0 or newer to work properly when accessed from inside cluster: the same service endpoint will be selected from all network namespaces on the host.
Unable to install direct node route {Ifindex: 0 Dst: fd02::100/120 Src: <nil> Gw: <nil> Flags: [] Table: 0 Realm: 0}
Cilium pods: [cilium-s88p7 cilium-w6854]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
testclient-76f25                        
testclient-n42q6                        
testds-7svvf                            
testds-g2tt4                            
coredns-69b675786c-nlm9h                
grafana-5747bcc8f9-jjp2v                
prometheus-655fb888d7-zfj28             
test-k8s2-7f96d84c65-hfddt              
Cilium agent 'cilium-s88p7': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 46 Failed 0
Cilium agent 'cilium-w6854': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 36 Failed 0


Standard Error

Click to show.
01:47:01 STEP: Installing Cilium
01:47:03 STEP: Waiting for Cilium to become ready
01:47:16 STEP: Validating if Kubernetes DNS is deployed
01:47:16 STEP: Checking if deployment is ready
01:47:16 STEP: Checking if kube-dns service is plumbed correctly
01:47:16 STEP: Checking if DNS can resolve
01:47:16 STEP: Checking if pods have identity
01:47:17 STEP: Kubernetes DNS is up and operational
01:47:17 STEP: Validating Cilium Installation
01:47:17 STEP: Performing Cilium controllers preflight check
01:47:17 STEP: Performing Cilium status preflight check
01:47:17 STEP: Performing Cilium health check
01:47:19 STEP: Performing Cilium service preflight check
01:47:19 STEP: Performing K8s service preflight check
01:47:19 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-s88p7': Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), clean-cilium-state (init)
	 Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory
	 
	 command terminated with exit code 1
	 

01:47:19 STEP: Performing Cilium status preflight check
01:47:19 STEP: Performing Cilium health check
01:47:19 STEP: Performing Cilium controllers preflight check
01:47:21 STEP: Performing Cilium service preflight check
01:47:21 STEP: Performing K8s service preflight check
01:47:21 STEP: Performing Cilium status preflight check
01:47:21 STEP: Performing Cilium controllers preflight check
01:47:21 STEP: Performing Cilium health check
01:47:22 STEP: Performing Cilium service preflight check
01:47:22 STEP: Performing K8s service preflight check
01:47:22 STEP: Performing Cilium health check
01:47:22 STEP: Performing Cilium status preflight check
01:47:22 STEP: Performing Cilium controllers preflight check
01:47:24 STEP: Performing Cilium service preflight check
01:47:24 STEP: Performing K8s service preflight check
01:47:24 STEP: Performing Cilium status preflight check
01:47:24 STEP: Performing Cilium controllers preflight check
01:47:24 STEP: Performing Cilium health check
01:47:29 STEP: Performing Cilium service preflight check
01:47:29 STEP: Performing K8s service preflight check
01:47:29 STEP: Performing Cilium health check
01:47:29 STEP: Performing Cilium status preflight check
01:47:29 STEP: Performing Cilium controllers preflight check
01:47:30 STEP: Performing Cilium service preflight check
01:47:30 STEP: Performing K8s service preflight check
01:47:32 STEP: Waiting for cilium-operator to be ready
01:47:32 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
01:47:32 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
01:47:32 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[fd05::12]:30326/hello"
01:47:32 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[::ffff:127.0.0.1]:30099/hello"
01:47:32 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[::ffff:127.0.0.1]:31662"
01:47:32 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[::ffff:192.168.56.11]:30099/hello"
01:47:32 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://192.168.56.11:31662"
01:47:32 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://10.105.143.95:10069/hello"
01:47:32 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://10.105.143.95:10080"
01:47:32 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[fd04::12]:32271"
01:47:32 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://192.168.57.12:31662"
01:47:32 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[::ffff:192.168.56.11]:31662"
01:47:32 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://127.0.0.1:31662"
01:47:32 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://127.0.0.1:30099/hello"
01:47:32 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://192.168.57.12:30099/hello"
01:47:32 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[fd05::11]:30326/hello"
01:47:32 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://192.168.56.11:30099/hello"
01:47:32 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[fd05::12]:32271"
01:47:32 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://192.168.56.12:30099/hello"
01:47:32 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[fd05::11]:32271"
01:47:32 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://192.168.56.12:31662"
01:47:32 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://192.168.57.11:31662"
01:47:32 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[::ffff:192.168.56.12]:31662"
01:47:32 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[fd03::9dd0]:10069/hello"
01:47:32 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[fd04::12]:30326/hello"
01:47:32 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://192.168.57.11:30099/hello"
01:47:32 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[::ffff:192.168.56.12]:30099/hello"
01:47:32 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[fd03::9dd0]:10080"
01:47:32 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[fd04::11]:32271"
01:47:32 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[fd04::11]:30326/hello"
01:47:33 STEP: Making 10 curl requests from testclient-76f25 pod to service tftp://[::ffff:192.168.56.12]:30099/hello
01:47:33 STEP: Making 10 curl requests from testclient-76f25 pod to service http://10.105.143.95:10080
01:47:33 STEP: Making 10 curl requests from testclient-76f25 pod to service tftp://192.168.56.12:30099/hello
01:47:33 STEP: Making 10 curl requests from testclient-76f25 pod to service http://[fd03::9dd0]:10080
01:47:33 STEP: Making 10 curl requests from testclient-76f25 pod to service tftp://[fd03::9dd0]:10069/hello
01:47:33 STEP: Making 10 curl requests from testclient-76f25 pod to service http://[::ffff:192.168.56.12]:31662
01:47:33 STEP: Making 10 curl requests from testclient-76f25 pod to service http://192.168.56.11:31662
01:47:33 STEP: Making 10 curl requests from testclient-76f25 pod to service http://[fd04::11]:32271
01:47:33 STEP: Making 10 curl requests from testclient-76f25 pod to service tftp://192.168.56.11:30099/hello
01:47:33 STEP: Making 10 curl requests from testclient-76f25 pod to service tftp://10.105.143.95:10069/hello
01:47:33 STEP: Making 10 curl requests from testclient-76f25 pod to service tftp://[fd04::11]:30326/hello
01:47:33 STEP: Making 10 curl requests from testclient-76f25 pod to service tftp://[fd04::12]:30326/hello
01:47:33 STEP: Making 10 curl requests from testclient-76f25 pod to service http://[fd04::12]:32271
01:47:33 STEP: Making 10 curl requests from testclient-76f25 pod to service http://[::ffff:192.168.56.11]:31662
01:47:33 STEP: Making 10 curl requests from testclient-76f25 pod to service tftp://[::ffff:192.168.56.11]:30099/hello
01:47:33 STEP: Making 10 curl requests from testclient-76f25 pod to service http://192.168.56.12:31662
01:47:34 STEP: Making 10 curl requests from testclient-n42q6 pod to service tftp://[::ffff:192.168.56.12]:30099/hello
01:47:34 STEP: Making 10 curl requests from testclient-n42q6 pod to service http://192.168.56.11:31662
01:47:34 STEP: Making 10 curl requests from testclient-n42q6 pod to service http://10.105.143.95:10080
01:47:34 STEP: Making 10 curl requests from testclient-n42q6 pod to service http://[::ffff:192.168.56.12]:31662
01:47:34 STEP: Making 10 curl requests from testclient-n42q6 pod to service tftp://192.168.56.12:30099/hello
01:47:34 STEP: Making 10 curl requests from testclient-n42q6 pod to service tftp://192.168.56.11:30099/hello
01:47:34 STEP: Making 10 curl requests from testclient-n42q6 pod to service http://192.168.56.12:31662
01:47:34 STEP: Making 10 curl requests from testclient-n42q6 pod to service tftp://[::ffff:192.168.56.11]:30099/hello
01:47:34 STEP: Making 10 curl requests from testclient-n42q6 pod to service tftp://10.105.143.95:10069/hello
01:47:35 STEP: Making 10 curl requests from testclient-n42q6 pod to service http://[::ffff:192.168.56.11]:31662
FAIL: Request from k8s1 to service http://[fd05::11]:32271 failed
Expected command: kubectl exec -n kube-system log-gatherer-m86n8 -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://[fd05::11]:32271 -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 
	 
	 Hostname: testds-7svvf
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd05::11]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd05::11]:32271
	 	user-agent=cilium-test-8631/2
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 8631/2 exit code: 0
	 
	 
	 Hostname: testds-7svvf
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd05::11]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd05::11]:32271
	 	user-agent=cilium-test-8631/4
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 8631/4 exit code: 0
	 
	 
	 Hostname: testds-7svvf
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd05::11]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd05::11]:32271
	 	user-agent=cilium-test-8631/6
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 8631/6 exit code: 0
	 
	 
	 Hostname: testds-7svvf
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd05::11]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd05::11]:32271
	 	user-agent=cilium-test-8631/8
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 8631/8 exit code: 0
	 
	 
	 Hostname: testds-7svvf
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd05::11]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd05::11]:32271
	 	user-agent=cilium-test-8631/9
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 8631/9 exit code: 0
	 
	 
	 Hostname: testds-7svvf
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd05::11]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd05::11]:32271
	 	user-agent=cilium-test-8631/10
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 8631/10 exit code: 0
	 failed: :8631/1=7:8631/3=7:8631/5=7:8631/7=7
	 
Stderr:
 	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:47:35 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:47:37 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:47:38 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:47:39 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:47:39 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:47:39 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 command terminated with exit code 42
	 

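Note on the failure signature: every "FAIL:" block in this output runs the same probe, a bash loop executed via kubectl exec that curls the target URL ten times and exits 42 if any round fails. Unrolled for readability it is roughly the sketch below; TARGET is a placeholder for the URL under test (here http://[fd05::11]:32271), everything else is taken from the command quoted above. Per curl's documented exit codes, the "=7" rounds mean curl could not connect at all, while the "=28" rounds in the later blocks mean the transfer hit the --max-time 20 limit.

    # Sketch of the per-URL probe loop from the FAIL blocks (TARGET is a placeholder).
    TARGET="http://[fd05::11]:32271"
    fails=""
    id=$RANDOM
    for i in $(seq 1 10); do
      if curl --path-as-is -s -D /dev/stderr --fail \
           --connect-timeout 5 --max-time 20 \
           "$TARGET" -H "User-Agent: cilium-test-$id/$i"; then
        echo "Test round $id/$i exit code: $?"
      else
        # $? is curl's exit status here: 7 = could not connect, 28 = timed out.
        fails="$fails:$id/$i=$?"
      fi
    done
    if [ -n "$fails" ]; then echo "failed: $fails"; fi
    # Exit 42 when at least one round failed; this is the "Exitcode: 42" reported above.
    cnt="${fails//[^:]}"
    if [ ${#cnt} -gt 0 ]; then exit 42; fi
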
FAIL: Request from k8s1 to service http://[fd04::12]:32271 failed
Expected command: kubectl exec -n kube-system log-gatherer-m86n8 -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://[fd04::12]:32271 -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 
	 
	 Hostname: testds-7svvf
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd04::12]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd04::12]:32271
	 	user-agent=cilium-test-25170/4
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 25170/4 exit code: 0
	 
	 
	 Hostname: testds-7svvf
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd04::12]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd04::12]:32271
	 	user-agent=cilium-test-25170/5
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 25170/5 exit code: 0
	 
	 
	 Hostname: testds-7svvf
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd04::12]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd04::12]:32271
	 	user-agent=cilium-test-25170/6
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 25170/6 exit code: 0
	 
	 
	 Hostname: testds-7svvf
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd04::12]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd04::12]:32271
	 	user-agent=cilium-test-25170/8
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 25170/8 exit code: 0
	 
	 
	 Hostname: testds-7svvf
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd04::12]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd04::12]:32271
	 	user-agent=cilium-test-25170/9
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 25170/9 exit code: 0
	 failed: :25170/1=7:25170/2=7:25170/3=7:25170/7=7:25170/10=7
	 
Stderr:
 	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:47:38 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:47:38 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:47:38 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:47:39 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:47:39 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 command terminated with exit code 42
	 

FAIL: Request from k8s1 to service http://[fd03::9dd0]:10080 failed
Expected command: kubectl exec -n kube-system log-gatherer-m86n8 -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://[fd03::9dd0]:10080 -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 
	 
	 Hostname: testds-7svvf
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd03::9dd0]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd03::9dd0]:10080
	 	user-agent=cilium-test-587/1
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 587/1 exit code: 0
	 
	 
	 Hostname: testds-7svvf
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd03::9dd0]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd03::9dd0]:10080
	 	user-agent=cilium-test-587/2
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 587/2 exit code: 0
	 
	 
	 Hostname: testds-7svvf
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd03::9dd0]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd03::9dd0]:10080
	 	user-agent=cilium-test-587/3
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 587/3 exit code: 0
	 
	 
	 Hostname: testds-7svvf
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd03::9dd0]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd03::9dd0]:10080
	 	user-agent=cilium-test-587/5
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 587/5 exit code: 0
	 failed: :587/4=7:587/6=7:587/7=7:587/8=7:587/9=7:587/10=7
	 
Stderr:
 	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:47:35 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:47:35 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:47:35 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:47:36 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 command terminated with exit code 42
	 

FAIL: Request from k8s1 to service http://[fd04::11]:32271 failed
Expected command: kubectl exec -n kube-system log-gatherer-m86n8 -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://[fd04::11]:32271 -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 
	 
	 Hostname: testds-7svvf
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd04::11]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd04::11]:32271
	 	user-agent=cilium-test-26661/2
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 26661/2 exit code: 0
	 
	 
	 Hostname: testds-7svvf
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd04::11]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd04::11]:32271
	 	user-agent=cilium-test-26661/3
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 26661/3 exit code: 0
	 
	 
	 Hostname: testds-7svvf
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd04::11
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd04::11]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd04::11]:32271
	 	user-agent=cilium-test-26661/8
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 26661/8 exit code: 0
	 failed: :26661/1=7:26661/4=7:26661/5=7:26661/6=7:26661/7=7:26661/9=7:26661/10=7
	 
Stderr:
 	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:47:35 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:47:35 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:47:39 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 command terminated with exit code 42
	 

FAIL: Request from testclient-76f25 pod to service http://[fd03::9dd0]:10080 failed
Expected command: kubectl exec -n default testclient-76f25 -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://[fd03::9dd0]:10080 -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 
	 
	 Hostname: testds-g2tt4
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd02::1f0
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd03::9dd0]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd03::9dd0]:10080
	 	user-agent=cilium-test-16373/1
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 16373/1 exit code: 0
	 
	 
	 Hostname: testds-g2tt4
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd02::1f0
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd03::9dd0]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd03::9dd0]:10080
	 	user-agent=cilium-test-16373/2
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 16373/2 exit code: 0
	 
	 
	 Hostname: testds-g2tt4
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd02::1f0
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd03::9dd0]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd03::9dd0]:10080
	 	user-agent=cilium-test-16373/4
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 16373/4 exit code: 0
	 
	 
	 Hostname: testds-g2tt4
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd02::1f0
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd03::9dd0]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd03::9dd0]:10080
	 	user-agent=cilium-test-16373/5
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 16373/5 exit code: 0
	 
	 
	 Hostname: testds-g2tt4
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd02::1f0
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd03::9dd0]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd03::9dd0]:10080
	 	user-agent=cilium-test-16373/7
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 16373/7 exit code: 0
	 
	 
	 Hostname: testds-g2tt4
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd02::1f0
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd03::9dd0]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd03::9dd0]:10080
	 	user-agent=cilium-test-16373/9
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 16373/9 exit code: 0
	 failed: :16373/3=28:16373/6=28:16373/8=28:16373/10=28
	 
Stderr:
 	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:47:34 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:47:34 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:47:39 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:47:39 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:47:44 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:47:49 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 command terminated with exit code 42
	 

FAIL: Request from testclient-76f25 pod to service http://[fd04::12]:32271 failed
Expected command: kubectl exec -n default testclient-76f25 -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://[fd04::12]:32271 -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 
	 
	 Hostname: testds-g2tt4
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd02::1f0
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd04::12]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd04::12]:32271
	 	user-agent=cilium-test-24218/1
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 24218/1 exit code: 0
	 
	 
	 Hostname: testds-g2tt4
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd02::1f0
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd04::12]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd04::12]:32271
	 	user-agent=cilium-test-24218/3
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 24218/3 exit code: 0
	 
	 
	 Hostname: testds-g2tt4
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd02::1f0
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd04::12]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd04::12]:32271
	 	user-agent=cilium-test-24218/4
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 24218/4 exit code: 0
	 
	 
	 Hostname: testds-g2tt4
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd02::1f0
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd04::12]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd04::12]:32271
	 	user-agent=cilium-test-24218/5
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 24218/5 exit code: 0
	 
	 
	 Hostname: testds-g2tt4
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd02::1f0
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd04::12]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd04::12]:32271
	 	user-agent=cilium-test-24218/9
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 24218/9 exit code: 0
	 failed: :24218/2=28:24218/6=28:24218/7=28:24218/8=28:24218/10=28
	 
Stderr:
 	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:47:35 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:47:40 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:47:40 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:47:40 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:47:55 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 command terminated with exit code 42
	 

FAIL: Request from testclient-76f25 pod to service tftp://[fd03::9dd0]:10069/hello failed
Expected command: kubectl exec -n default testclient-76f25 -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 tftp://[fd03::9dd0]:10069/hello -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 
	 Hostname: testds-g2tt4
	 
	 Request Information:
	 	client_address=fd02::1f0
	 	client_port=48593
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 18730/1 exit code: 0
	 
	 Hostname: testds-g2tt4
	 
	 Request Information:
	 	client_address=fd02::1f0
	 	client_port=51876
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 18730/3 exit code: 0
	 
	 Hostname: testds-g2tt4
	 
	 Request Information:
	 	client_address=fd02::1f0
	 	client_port=55150
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 18730/5 exit code: 0
	 
	 Hostname: testds-g2tt4
	 
	 Request Information:
	 	client_address=fd02::1f0
	 	client_port=55796
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 18730/7 exit code: 0
	 failed: :18730/2=28:18730/4=28:18730/6=28:18730/8=28:18730/9=28:18730/10=28
	 
Stderr:
 	 command terminated with exit code 42
	 

FAIL: Request from testclient-76f25 pod to service tftp://[fd04::12]:30326/hello failed
Expected command: kubectl exec -n default testclient-76f25 -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 tftp://[fd04::12]:30326/hello -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 
	 Hostname: testds-g2tt4
	 
	 Request Information:
	 	client_address=fd02::1f0
	 	client_port=45188
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 2792/3 exit code: 0
	 
	 Hostname: testds-g2tt4
	 
	 Request Information:
	 	client_address=fd02::1f0
	 	client_port=60892
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 2792/5 exit code: 0
	 
	 Hostname: testds-g2tt4
	 
	 Request Information:
	 	client_address=fd02::1f0
	 	client_port=37764
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 2792/7 exit code: 0
	 
	 Hostname: testds-g2tt4
	 
	 Request Information:
	 	client_address=fd02::1f0
	 	client_port=43459
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 2792/9 exit code: 0
	 failed: :2792/1=28:2792/2=28:2792/4=28:2792/6=28:2792/8=28:2792/10=28
	 
Stderr:
 	 command terminated with exit code 42
	 

FAIL: Request from testclient-76f25 pod to service tftp://[fd04::11]:30326/hello failed
Expected command: kubectl exec -n default testclient-76f25 -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 tftp://[fd04::11]:30326/hello -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 
	 Hostname: testds-g2tt4
	 
	 Request Information:
	 	client_address=fd02::1f0
	 	client_port=38128
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 7022/1 exit code: 0
	 
	 Hostname: testds-g2tt4
	 
	 Request Information:
	 	client_address=fd02::1f0
	 	client_port=51301
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 7022/7 exit code: 0
	 
	 Hostname: testds-g2tt4
	 
	 Request Information:
	 	client_address=fd02::1f0
	 	client_port=46955
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 7022/9 exit code: 0
	 
	 Hostname: testds-7svvf
	 
	 Request Information:
	 	client_address=fd02::1f0
	 	client_port=34590
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 7022/10 exit code: 0
	 failed: :7022/2=28:7022/3=28:7022/4=28:7022/5=28:7022/6=28:7022/8=28
	 
Stderr:
 	 command terminated with exit code 42
	 

FAIL: Request from k8s1 to service tftp://[fd04::11]:30326/hello failed
Expected command: kubectl exec -n kube-system log-gatherer-m86n8 -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 tftp://[fd04::11]:30326/hello -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 
	 Hostname: testds-7svvf
	 
	 Request Information:
	 	client_address=fd04::11
	 	client_port=43394
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 20746/1 exit code: 0
	 
	 Hostname: testds-7svvf
	 
	 Request Information:
	 	client_address=fd04::11
	 	client_port=46227
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 20746/3 exit code: 0
	 
	 Hostname: testds-g2tt4
	 
	 Request Information:
	 	client_address=fd04::11
	 	client_port=39856
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 20746/9 exit code: 0
	 
	 Hostname: testds-7svvf
	 
	 Request Information:
	 	client_address=fd04::11
	 	client_port=39035
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 20746/10 exit code: 0
	 failed: :20746/2=28:20746/4=28:20746/5=28:20746/6=28:20746/7=28:20746/8=28
	 
Stderr:
 	 command terminated with exit code 42
	 

FAIL: Request from k8s1 to service tftp://[fd05::11]:30326/hello failed
Expected command: kubectl exec -n kube-system log-gatherer-m86n8 -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 tftp://[fd05::11]:30326/hello -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 
	 Hostname: testds-7svvf
	 
	 Request Information:
	 	client_address=fd04::11
	 	client_port=34342
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 20575/1 exit code: 0
	 
	 Hostname: testds-7svvf
	 
	 Request Information:
	 	client_address=fd04::11
	 	client_port=36290
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 20575/3 exit code: 0
	 
	 Hostname: testds-g2tt4
	 
	 Request Information:
	 	client_address=fd04::11
	 	client_port=59638
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 20575/9 exit code: 0
	 
	 Hostname: testds-7svvf
	 
	 Request Information:
	 	client_address=fd04::11
	 	client_port=51262
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 20575/10 exit code: 0
	 failed: :20575/2=28:20575/4=28:20575/5=28:20575/6=28:20575/7=28:20575/8=28
	 
Stderr:
 	 command terminated with exit code 42
	 

FAIL: Request from testclient-76f25 pod to service http://[fd04::11]:32271 failed
Expected command: kubectl exec -n default testclient-76f25 -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://[fd04::11]:32271 -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 
	 
	 Hostname: testds-g2tt4
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd02::1f0
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd04::11]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd04::11]:32271
	 	user-agent=cilium-test-29963/4
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 29963/4 exit code: 0
	 
	 
	 Hostname: testds-g2tt4
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd02::1f0
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd04::11]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd04::11]:32271
	 	user-agent=cilium-test-29963/7
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 29963/7 exit code: 0
	 
	 
	 Hostname: testds-g2tt4
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=fd02::1f0
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://[fd04::11]:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=[fd04::11]:32271
	 	user-agent=cilium-test-29963/10
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 29963/10 exit code: 0
	 failed: :29963/1=28:29963/2=28:29963/3=28:29963/5=28:29963/6=28:29963/8=28:29963/9=28
	 
Stderr:
 	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:47:48 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:47:58 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 HTTP/1.1 200 OK
	 Date: Thu, 02 Dec 2021 01:48:08 GMT
	 Content-Type: text/plain
	 Transfer-Encoding: chunked
	 Connection: keep-alive
	 Server: echoserver
	 
	 command terminated with exit code 42
	 

FAIL: Request from k8s1 to service tftp://[fd03::9dd0]:10069/hello failed
Expected command: kubectl exec -n kube-system log-gatherer-m86n8 -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 tftp://[fd03::9dd0]:10069/hello -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 
	 Hostname: testds-7svvf
	 
	 Request Information:
	 	client_address=fd04::11
	 	client_port=38466
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 16974/1 exit code: 0
	 
	 Hostname: testds-g2tt4
	 
	 Request Information:
	 	client_address=fd04::11
	 	client_port=35299
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 16974/8 exit code: 0
	 
	 Hostname: testds-7svvf
	 
	 Request Information:
	 	client_address=fd04::11
	 	client_port=59782
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 16974/9 exit code: 0
	 
	 Hostname: testds-g2tt4
	 
	 Request Information:
	 	client_address=fd04::11
	 	client_port=42856
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 16974/10 exit code: 0
	 failed: :16974/2=28:16974/3=28:16974/4=28:16974/5=28:16974/6=28:16974/7=28
	 
Stderr:
 	 command terminated with exit code 42
	 

FAIL: Request from k8s1 to service tftp://[fd04::12]:30326/hello failed
Expected command: kubectl exec -n kube-system log-gatherer-m86n8 -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 tftp://[fd04::12]:30326/hello -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 
	 Hostname: testds-7svvf
	 
	 Request Information:
	 	client_address=fd04::11
	 	client_port=49078
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 25280/1 exit code: 0
	 
	 Hostname: testds-7svvf
	 
	 Request Information:
	 	client_address=fd04::11
	 	client_port=52440
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 25280/3 exit code: 0
	 
	 Hostname: testds-g2tt4
	 
	 Request Information:
	 	client_address=fd04::11
	 	client_port=45657
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 25280/9 exit code: 0
	 
	 Hostname: testds-g2tt4
	 
	 Request Information:
	 	client_address=fd04::11
	 	client_port=34055
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 25280/10 exit code: 0
	 failed: :25280/2=28:25280/4=28:25280/5=28:25280/6=28:25280/7=28:25280/8=28
	 
Stderr:
 	 command terminated with exit code 42
	 

=== Test Finished at 2021-12-02T01:48:09Z====
01:48:09 STEP: Running JustAfterEach block for EntireTestsuite K8sServicesTest
===================== TEST FAILED =====================
01:48:10 STEP: Running AfterFailed block for EntireTestsuite K8sServicesTest
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE           NAME                               READY   STATUS    RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 cilium-monitoring   grafana-5747bcc8f9-jjp2v           1/1     Running   0          27m     10.0.1.234      k8s2   <none>           <none>
	 cilium-monitoring   prometheus-655fb888d7-zfj28        1/1     Running   0          27m     10.0.1.94       k8s2   <none>           <none>
	 default             test-k8s2-7f96d84c65-hfddt         2/2     Running   0          5m18s   10.0.1.154      k8s2   <none>           <none>
	 default             testclient-76f25                   1/1     Running   0          5m18s   10.0.1.245      k8s2   <none>           <none>
	 default             testclient-n42q6                   1/1     Running   0          5m18s   10.0.0.14       k8s1   <none>           <none>
	 default             testds-7svvf                       2/2     Running   0          4m54s   10.0.0.144      k8s1   <none>           <none>
	 default             testds-g2tt4                       2/2     Running   0          4m49s   10.0.1.88       k8s2   <none>           <none>
	 kube-system         cilium-operator-588bf6db97-krncp   1/1     Running   0          67s     192.168.56.12   k8s2   <none>           <none>
	 kube-system         cilium-operator-588bf6db97-zf9xf   1/1     Running   0          67s     192.168.56.11   k8s1   <none>           <none>
	 kube-system         cilium-s88p7                       1/1     Running   0          67s     192.168.56.12   k8s2   <none>           <none>
	 kube-system         cilium-w6854                       1/1     Running   0          67s     192.168.56.11   k8s1   <none>           <none>
	 kube-system         coredns-69b675786c-nlm9h           1/1     Running   0          7m41s   10.0.0.179      k8s1   <none>           <none>
	 kube-system         etcd-k8s1                          1/1     Running   0          30m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-apiserver-k8s1                1/1     Running   0          30m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-controller-manager-k8s1       1/1     Running   0          30m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-proxy-7bv8m                   1/1     Running   0          29m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-proxy-j94wg                   1/1     Running   0          28m     192.168.56.12   k8s2   <none>           <none>
	 kube-system         kube-scheduler-k8s1                1/1     Running   0          30m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         log-gatherer-gzjzn                 1/1     Running   0          27m     192.168.56.12   k8s2   <none>           <none>
	 kube-system         log-gatherer-m86n8                 1/1     Running   0          27m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         registry-adder-4pjpg               1/1     Running   0          28m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         registry-adder-l57kl               1/1     Running   0          28m     192.168.56.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-s88p7 cilium-w6854]
cmd: kubectl exec -n kube-system cilium-s88p7 -c cilium-agent -- cilium service list
Exitcode: 0 
Stdout:
 	 ID    Frontend                Service Type   Backend                   
	 1     10.107.57.156:9090      ClusterIP      1 => 10.0.1.94:9090       
	 2     10.96.0.1:443           ClusterIP      1 => 192.168.56.11:6443   
	 3     10.96.0.10:53           ClusterIP      1 => 10.0.0.179:53        
	 4     10.96.0.10:9153         ClusterIP      1 => 10.0.0.179:9153      
	 5     10.111.240.13:3000      ClusterIP      1 => 10.0.1.234:3000      
	 6     10.107.100.153:80       ClusterIP      1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 7     10.107.100.153:69       ClusterIP      1 => 10.0.0.144:69        
	                                              2 => 10.0.1.88:69         
	 8     10.105.143.95:10080     ClusterIP      1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 9     10.105.143.95:10069     ClusterIP      1 => 10.0.0.144:69        
	                                              2 => 10.0.1.88:69         
	 11    192.168.56.12:31662     NodePort       1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 12    0.0.0.0:31662           NodePort       1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 14    192.168.56.12:30099     NodePort       1 => 10.0.0.144:69        
	                                              2 => 10.0.1.88:69         
	 15    0.0.0.0:30099           NodePort       1 => 10.0.0.144:69        
	                                              2 => 10.0.1.88:69         
	 16    10.96.69.228:10080      ClusterIP      1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 17    10.96.69.228:10069      ClusterIP      1 => 10.0.0.144:69        
	                                              2 => 10.0.1.88:69         
	 18    0.0.0.0:31761           NodePort       1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 20    192.168.56.12:31761     NodePort       1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 21    192.168.56.12:32115     NodePort       1 => 10.0.0.144:69        
	                                              2 => 10.0.1.88:69         
	 23    0.0.0.0:32115           NodePort       1 => 10.0.0.144:69        
	                                              2 => 10.0.1.88:69         
	 24    10.106.67.0:10080       ClusterIP      1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 25    10.106.67.0:10069       ClusterIP      1 => 10.0.0.144:69        
	                                              2 => 10.0.1.88:69         
	 26    0.0.0.0:31878           NodePort       1 => 10.0.1.88:80         
	 27    0.0.0.0:31878/i         NodePort       1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 30    192.168.56.12:31878     NodePort       1 => 10.0.1.88:80         
	 31    192.168.56.12:31878/i   NodePort       1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 34    192.168.56.12:31250     NodePort       1 => 10.0.1.88:69         
	 35    192.168.56.12:31250/i   NodePort       1 => 10.0.0.144:69        
	                                              2 => 10.0.1.88:69         
	 36    0.0.0.0:31250           NodePort       1 => 10.0.1.88:69         
	 37    0.0.0.0:31250/i         NodePort       1 => 10.0.0.144:69        
	                                              2 => 10.0.1.88:69         
	 38    10.110.57.30:10080      ClusterIP      1 => 10.0.1.154:80        
	 39    10.110.57.30:10069      ClusterIP      1 => 10.0.1.154:69        
	 42    192.168.56.12:30608     NodePort       1 => 10.0.1.154:80        
	 43    192.168.56.12:30608/i   NodePort       1 => 10.0.1.154:80        
	 44    0.0.0.0:30608           NodePort       1 => 10.0.1.154:80        
	 45    0.0.0.0:30608/i         NodePort       1 => 10.0.1.154:80        
	 46    192.168.56.12:31501     NodePort       1 => 10.0.1.154:69        
	 47    192.168.56.12:31501/i   NodePort       1 => 10.0.1.154:69        
	 48    0.0.0.0:31501           NodePort       1 => 10.0.1.154:69        
	 49    0.0.0.0:31501/i         NodePort       1 => 10.0.1.154:69        
	 52    10.104.204.113:10080    ClusterIP      1 => 10.0.1.154:80        
	 53    10.104.204.113:10069    ClusterIP      1 => 10.0.1.154:69        
	 55    192.168.56.12:30688     NodePort       1 => 10.0.1.154:80        
	 56    0.0.0.0:30688           NodePort       1 => 10.0.1.154:80        
	 57    0.0.0.0:32523           NodePort       1 => 10.0.1.154:69        
	 59    192.168.56.12:32523     NodePort       1 => 10.0.1.154:69        
	 60    10.100.41.101:80        ClusterIP      1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 61    192.168.56.12:32545     NodePort       1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 63    0.0.0.0:32545           NodePort       1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 64    10.100.59.183:80        ClusterIP      1 => 10.0.1.154:80        
	 67    192.168.56.12:30185     NodePort       1 => 10.0.1.154:80        
	 68    192.168.56.12:30185/i   NodePort       1 => 10.0.1.154:80        
	 69    0.0.0.0:30185           NodePort       1 => 10.0.1.154:80        
	 70    0.0.0.0:30185/i         NodePort       1 => 10.0.1.154:80        
	 71    10.111.188.14:20080     ClusterIP      1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 72    10.111.188.14:20069     ClusterIP      1 => 10.0.0.144:69        
	                                              2 => 10.0.1.88:69         
	 73    192.0.2.233:20080       ExternalIPs    1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 74    192.0.2.233:20069       ExternalIPs    1 => 10.0.0.144:69        
	                                              2 => 10.0.1.88:69         
	 76    192.168.56.12:30500     NodePort       1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 77    0.0.0.0:30500           NodePort       1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 79    192.168.56.12:31717     NodePort       1 => 10.0.0.144:69        
	                                              2 => 10.0.1.88:69         
	 80    0.0.0.0:31717           NodePort       1 => 10.0.0.144:69        
	                                              2 => 10.0.1.88:69         
	 81    [fd03::17ea]:80         ClusterIP      1 => [fd02::8c]:80        
	                                              2 => [fd02::11d]:80       
	 82    [fd03::17ea]:69         ClusterIP      1 => [fd02::8c]:69        
	                                              2 => [fd02::11d]:69       
	 83    [fd03::9dd0]:10069      ClusterIP      1 => [fd02::8c]:69        
	                                              2 => [fd02::11d]:69       
	 84    [fd03::9dd0]:10080      ClusterIP      1 => [fd02::8c]:80        
	                                              2 => [fd02::11d]:80       
	 85    [fd04::12]:30326        NodePort       1 => [fd02::8c]:69        
	                                              2 => [fd02::11d]:69       
	 86    [::]:30326              NodePort       1 => [fd02::8c]:69        
	                                              2 => [fd02::11d]:69       
	 87    [fd04::12]:32271        NodePort       1 => [fd02::8c]:80        
	                                              2 => [fd02::11d]:80       
	 88    [::]:32271              NodePort       1 => [fd02::8c]:80        
	                                              2 => [fd02::11d]:80       
	 89    [fd03::9ef2]:10080      ClusterIP      1 => [fd02::8c]:80        
	                                              2 => [fd02::11d]:80       
	 90    [fd03::9ef2]:10069      ClusterIP      1 => [fd02::8c]:69        
	                                              2 => [fd02::11d]:69       
	 91    [fd04::12]:31036        NodePort       1 => [fd02::8c]:80        
	                                              2 => [fd02::11d]:80       
	 92    [::]:31036              NodePort       1 => [fd02::8c]:80        
	                                              2 => [fd02::11d]:80       
	 93    [fd04::12]:30634        NodePort       1 => [fd02::8c]:69        
	                                              2 => [fd02::11d]:69       
	 94    [::]:30634              NodePort       1 => [fd02::8c]:69        
	                                              2 => [fd02::11d]:69       
	 95    [fd03::4c18]:10080      ClusterIP      1 => [fd02::8c]:80        
	                                              2 => [fd02::11d]:80       
	 96    [fd03::4c18]:10069      ClusterIP      1 => [fd02::8c]:69        
	                                              2 => [fd02::11d]:69       
	 97    [fd04::12]:30738        NodePort       1 => [fd02::11d]:80       
	 98    [fd04::12]:30738/i      NodePort       1 => [fd02::8c]:80        
	                                              2 => [fd02::11d]:80       
	 99    [::]:30738              NodePort       1 => [fd02::11d]:80       
	 100   [::]:30738/i            NodePort       1 => [fd02::8c]:80        
	                                              2 => [fd02::11d]:80       
	 101   [fd04::12]:32241        NodePort       1 => [fd02::11d]:69       
	 102   [fd04::12]:32241/i      NodePort       1 => [fd02::8c]:69        
	                                              2 => [fd02::11d]:69       
	 103   [::]:32241              NodePort       1 => [fd02::11d]:69       
	 104   [::]:32241/i            NodePort       1 => [fd02::8c]:69        
	                                              2 => [fd02::11d]:69       
	 105   [fd03::cf7b]:10069      ClusterIP      1 => [fd02::169]:69       
	 106   [fd03::cf7b]:10080      ClusterIP      1 => [fd02::169]:80       
	 107   [fd04::12]:32744        NodePort       1 => [fd02::169]:69       
	 108   [fd04::12]:32744/i      NodePort       1 => [fd02::169]:69       
	 109   [::]:32744              NodePort       1 => [fd02::169]:69       
	 110   [::]:32744/i            NodePort       1 => [fd02::169]:69       
	 111   [fd04::12]:31873        NodePort       1 => [fd02::169]:80       
	 112   [fd04::12]:31873/i      NodePort       1 => [fd02::169]:80       
	 113   [::]:31873              NodePort       1 => [fd02::169]:80       
	 114   [::]:31873/i            NodePort       1 => [fd02::169]:80       
	 115   [fd03::a70a]:10080      ClusterIP      1 => [fd02::169]:80       
	 116   [fd03::a70a]:10069      ClusterIP      1 => [fd02::169]:69       
	 117   [fd04::12]:31830        NodePort       1 => [fd02::169]:80       
	 118   [::]:31830              NodePort       1 => [fd02::169]:80       
	 119   [fd04::12]:31010        NodePort       1 => [fd02::169]:69       
	 120   [::]:31010              NodePort       1 => [fd02::169]:69       
	 121   [fd03::f752]:20069      ClusterIP      1 => [fd02::8c]:69        
	                                              2 => [fd02::11d]:69       
	 122   [fd03::f752]:20080      ClusterIP      1 => [fd02::8c]:80        
	                                              2 => [fd02::11d]:80       
	 123   [fd03::999]:20069       ExternalIPs    1 => [fd02::8c]:69        
	                                              2 => [fd02::11d]:69       
	 124   [fd03::999]:20080       ExternalIPs    1 => [fd02::8c]:80        
	                                              2 => [fd02::11d]:80       
	 125   [fd04::12]:31071        NodePort       1 => [fd02::8c]:69        
	                                              2 => [fd02::11d]:69       
	 126   [::]:31071              NodePort       1 => [fd02::8c]:69        
	                                              2 => [fd02::11d]:69       
	 127   [fd04::12]:32320        NodePort       1 => [fd02::8c]:80        
	                                              2 => [fd02::11d]:80       
	 128   [::]:32320              NodePort       1 => [fd02::8c]:80        
	                                              2 => [fd02::11d]:80       
	 130   192.168.56.12:8080      HostPort       1 => 10.0.1.154:80        
	 131   0.0.0.0:8080            HostPort       1 => 10.0.1.154:80        
	 132   [fd04::12]:8080         HostPort       1 => [fd02::169]:80       
	 133   [::]:8080               HostPort       1 => [fd02::169]:80       
	 135   192.168.56.12:6969      HostPort       1 => 10.0.1.154:69        
	 136   0.0.0.0:6969            HostPort       1 => 10.0.1.154:69        
	 137   [fd04::12]:6969         HostPort       1 => [fd02::169]:69       
	 138   [::]:6969               HostPort       1 => [fd02::169]:69       
	 139   192.168.57.12:8080      HostPort       1 => 10.0.1.154:80        
	 140   [fd05::12]:8080         HostPort       1 => [fd02::169]:80       
	 141   192.168.57.12:6969      HostPort       1 => 10.0.1.154:69        
	 142   [fd05::12]:6969         HostPort       1 => [fd02::169]:69       
	 143   [fd05::12]:30738        NodePort       1 => [fd02::11d]:80       
	 144   [fd05::12]:30738/i      NodePort       1 => [fd02::8c]:80        
	                                              2 => [fd02::11d]:80       
	 145   [fd05::12]:32241        NodePort       1 => [fd02::11d]:69       
	 146   [fd05::12]:32241/i      NodePort       1 => [fd02::8c]:69        
	                                              2 => [fd02::11d]:69       
	 147   [fd05::12]:31873        NodePort       1 => [fd02::169]:80       
	 148   [fd05::12]:31873/i      NodePort       1 => [fd02::169]:80       
	 149   [fd05::12]:32744        NodePort       1 => [fd02::169]:69       
	 150   [fd05::12]:32744/i      NodePort       1 => [fd02::169]:69       
	 151   192.168.57.12:31878     NodePort       1 => 10.0.1.88:80         
	 152   192.168.57.12:31878/i   NodePort       1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 153   192.168.57.12:31250     NodePort       1 => 10.0.1.88:69         
	 154   192.168.57.12:31250/i   NodePort       1 => 10.0.0.144:69        
	                                              2 => 10.0.1.88:69         
	 155   192.168.57.12:32545     NodePort       1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 156   [fd05::12]:32271        NodePort       1 => [fd02::8c]:80        
	                                              2 => [fd02::11d]:80       
	 157   [fd05::12]:30326        NodePort       1 => [fd02::8c]:69        
	                                              2 => [fd02::11d]:69       
	 158   192.168.57.12:31662     NodePort       1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 159   192.168.57.12:30099     NodePort       1 => 10.0.0.144:69        
	                                              2 => 10.0.1.88:69         
	 160   192.168.57.12:30608     NodePort       1 => 10.0.1.154:80        
	 161   192.168.57.12:30608/i   NodePort       1 => 10.0.1.154:80        
	 162   192.168.57.12:31501     NodePort       1 => 10.0.1.154:69        
	 163   192.168.57.12:31501/i   NodePort       1 => 10.0.1.154:69        
	 164   192.168.57.12:30185     NodePort       1 => 10.0.1.154:80        
	 165   192.168.57.12:30185/i   NodePort       1 => 10.0.1.154:80        
	 166   [fd05::12]:31036        NodePort       1 => [fd02::8c]:80        
	                                              2 => [fd02::11d]:80       
	 167   [fd05::12]:30634        NodePort       1 => [fd02::8c]:69        
	                                              2 => [fd02::11d]:69       
	 168   [fd05::12]:31830        NodePort       1 => [fd02::169]:80       
	 169   [fd05::12]:31010        NodePort       1 => [fd02::169]:69       
	 170   [fd05::12]:31071        NodePort       1 => [fd02::8c]:69        
	                                              2 => [fd02::11d]:69       
	 171   [fd05::12]:32320        NodePort       1 => [fd02::8c]:80        
	                                              2 => [fd02::11d]:80       
	 172   192.168.57.12:31761     NodePort       1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 173   192.168.57.12:32115     NodePort       1 => 10.0.0.144:69        
	                                              2 => 10.0.1.88:69         
	 174   192.168.57.12:30688     NodePort       1 => 10.0.1.154:80        
	 175   192.168.57.12:32523     NodePort       1 => 10.0.1.154:69        
	 176   192.168.57.12:30500     NodePort       1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 177   192.168.57.12:31717     NodePort       1 => 10.0.0.144:69        
	                                              2 => 10.0.1.88:69         
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-s88p7 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                        IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                              
	 4          Disabled           Disabled          24011      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default             fd02::11d   10.0.1.88    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                            
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                     
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                             
	                                                            k8s:zgroup=testDS                                                                                                   
	 74         Disabled           Disabled          30796      k8s:app=grafana                                                                    fd02::11c   10.0.1.234   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                    
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                            
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                     
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                   
	 827        Disabled           Disabled          15390      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default             fd02::169   10.0.1.154   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                            
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                     
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                             
	                                                            k8s:zgroup=test-k8s2                                                                                                
	 1693       Disabled           Disabled          4          reserved:health                                                                    fd02::189   10.0.1.166   ready   
	 2177       Disabled           Disabled          41023      k8s:app=prometheus                                                                 fd02::1e0   10.0.1.94    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                    
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                            
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                              
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                   
	 3787       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                  ready   
	                                                            reserved:host                                                                                                       
	 3878       Disabled           Disabled          5446       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default             fd02::1f0   10.0.1.245   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                            
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                     
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                             
	                                                            k8s:zgroup=testDSClient                                                                                             
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-w6854 -c cilium-agent -- cilium service list
Exitcode: 0 
Stdout:
 	 ID    Frontend                Service Type   Backend                   
	 1     10.96.0.1:443           ClusterIP      1 => 192.168.56.11:6443   
	 2     10.96.0.10:53           ClusterIP      1 => 10.0.0.179:53        
	 3     10.96.0.10:9153         ClusterIP      1 => 10.0.0.179:9153      
	 4     10.111.240.13:3000      ClusterIP      1 => 10.0.1.234:3000      
	 5     10.107.57.156:9090      ClusterIP      1 => 10.0.1.94:9090       
	 6     10.107.100.153:80       ClusterIP      1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 7     10.107.100.153:69       ClusterIP      1 => 10.0.0.144:69        
	                                              2 => 10.0.1.88:69         
	 8     10.105.143.95:10069     ClusterIP      1 => 10.0.0.144:69        
	                                              2 => 10.0.1.88:69         
	 9     10.105.143.95:10080     ClusterIP      1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 10    192.168.56.11:31662     NodePort       1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 11    0.0.0.0:31662           NodePort       1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 14    192.168.56.11:30099     NodePort       1 => 10.0.0.144:69        
	                                              2 => 10.0.1.88:69         
	 15    0.0.0.0:30099           NodePort       1 => 10.0.0.144:69        
	                                              2 => 10.0.1.88:69         
	 16    10.96.69.228:10069      ClusterIP      1 => 10.0.0.144:69        
	                                              2 => 10.0.1.88:69         
	 17    10.96.69.228:10080      ClusterIP      1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 19    192.168.56.11:32115     NodePort       1 => 10.0.0.144:69        
	                                              2 => 10.0.1.88:69         
	 20    0.0.0.0:32115           NodePort       1 => 10.0.0.144:69        
	                                              2 => 10.0.1.88:69         
	 22    192.168.56.11:31761     NodePort       1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 23    0.0.0.0:31761           NodePort       1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 24    10.106.67.0:10080       ClusterIP      1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 25    10.106.67.0:10069       ClusterIP      1 => 10.0.0.144:69        
	                                              2 => 10.0.1.88:69         
	 28    192.168.56.11:31878     NodePort       1 => 10.0.0.144:80        
	 29    192.168.56.11:31878/i   NodePort       1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 30    0.0.0.0:31878           NodePort       1 => 10.0.0.144:80        
	 31    0.0.0.0:31878/i         NodePort       1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 32    0.0.0.0:31250           NodePort       1 => 10.0.0.144:69        
	 33    0.0.0.0:31250/i         NodePort       1 => 10.0.0.144:69        
	                                              2 => 10.0.1.88:69         
	 36    192.168.56.11:31250     NodePort       1 => 10.0.0.144:69        
	 37    192.168.56.11:31250/i   NodePort       1 => 10.0.0.144:69        
	                                              2 => 10.0.1.88:69         
	 38    10.110.57.30:10080      ClusterIP      1 => 10.0.1.154:80        
	 39    10.110.57.30:10069      ClusterIP      1 => 10.0.1.154:69        
	 42    192.168.56.11:30608     NodePort                                 
	 43    192.168.56.11:30608/i   NodePort       1 => 10.0.1.154:80        
	 44    0.0.0.0:30608           NodePort                                 
	 45    0.0.0.0:30608/i         NodePort       1 => 10.0.1.154:80        
	 46    192.168.56.11:31501     NodePort                                 
	 47    192.168.56.11:31501/i   NodePort       1 => 10.0.1.154:69        
	 50    0.0.0.0:31501           NodePort                                 
	 51    0.0.0.0:31501/i         NodePort       1 => 10.0.1.154:69        
	 52    10.104.204.113:10080    ClusterIP      1 => 10.0.1.154:80        
	 53    10.104.204.113:10069    ClusterIP      1 => 10.0.1.154:69        
	 55    192.168.56.11:32523     NodePort       1 => 10.0.1.154:69        
	 56    0.0.0.0:32523           NodePort       1 => 10.0.1.154:69        
	 57    0.0.0.0:30688           NodePort       1 => 10.0.1.154:80        
	 59    192.168.56.11:30688     NodePort       1 => 10.0.1.154:80        
	 60    10.100.41.101:80        ClusterIP      1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 61    192.168.56.11:32545     NodePort       1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 63    0.0.0.0:32545           NodePort       1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 64    10.100.59.183:80        ClusterIP      1 => 10.0.1.154:80        
	 65    192.168.56.11:30185     NodePort                                 
	 66    192.168.56.11:30185/i   NodePort       1 => 10.0.1.154:80        
	 69    0.0.0.0:30185           NodePort                                 
	 70    0.0.0.0:30185/i         NodePort       1 => 10.0.1.154:80        
	 71    10.111.188.14:20080     ClusterIP      1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 72    10.111.188.14:20069     ClusterIP      1 => 10.0.0.144:69        
	                                              2 => 10.0.1.88:69         
	 73    192.0.2.233:20080       ExternalIPs    1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 74    192.0.2.233:20069       ExternalIPs    1 => 10.0.0.144:69        
	                                              2 => 10.0.1.88:69         
	 76    192.168.56.11:30500     NodePort       1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 77    0.0.0.0:30500           NodePort       1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 79    192.168.56.11:31717     NodePort       1 => 10.0.0.144:69        
	                                              2 => 10.0.1.88:69         
	 80    0.0.0.0:31717           NodePort       1 => 10.0.0.144:69        
	                                              2 => 10.0.1.88:69         
	 81    [fd03::17ea]:80         ClusterIP      1 => [fd02::8c]:80        
	                                              2 => [fd02::11d]:80       
	 82    [fd03::17ea]:69         ClusterIP      1 => [fd02::8c]:69        
	                                              2 => [fd02::11d]:69       
	 83    [fd03::9dd0]:10080      ClusterIP      1 => [fd02::8c]:80        
	                                              2 => [fd02::11d]:80       
	 84    [fd03::9dd0]:10069      ClusterIP      1 => [fd02::8c]:69        
	                                              2 => [fd02::11d]:69       
	 85    [fd04::11]:32271        NodePort       1 => [fd02::8c]:80        
	                                              2 => [fd02::11d]:80       
	 86    [::]:32271              NodePort       1 => [fd02::8c]:80        
	                                              2 => [fd02::11d]:80       
	 87    [fd04::11]:30326        NodePort       1 => [fd02::8c]:69        
	                                              2 => [fd02::11d]:69       
	 88    [::]:30326              NodePort       1 => [fd02::8c]:69        
	                                              2 => [fd02::11d]:69       
	 89    [fd03::9ef2]:10080      ClusterIP      1 => [fd02::8c]:80        
	                                              2 => [fd02::11d]:80       
	 90    [fd03::9ef2]:10069      ClusterIP      1 => [fd02::8c]:69        
	                                              2 => [fd02::11d]:69       
	 91    [fd04::11]:31036        NodePort       1 => [fd02::8c]:80        
	                                              2 => [fd02::11d]:80       
	 92    [::]:31036              NodePort       1 => [fd02::8c]:80        
	                                              2 => [fd02::11d]:80       
	 93    [fd04::11]:30634        NodePort       1 => [fd02::8c]:69        
	                                              2 => [fd02::11d]:69       
	 94    [::]:30634              NodePort       1 => [fd02::8c]:69        
	                                              2 => [fd02::11d]:69       
	 95    [fd03::4c18]:10080      ClusterIP      1 => [fd02::8c]:80        
	                                              2 => [fd02::11d]:80       
	 96    [fd03::4c18]:10069      ClusterIP      1 => [fd02::8c]:69        
	                                              2 => [fd02::11d]:69       
	 97    [fd04::11]:30738        NodePort       1 => [fd02::8c]:80        
	 98    [fd04::11]:30738/i      NodePort       1 => [fd02::8c]:80        
	                                              2 => [fd02::11d]:80       
	 99    [::]:30738              NodePort       1 => [fd02::8c]:80        
	 100   [::]:30738/i            NodePort       1 => [fd02::8c]:80        
	                                              2 => [fd02::11d]:80       
	 101   [fd04::11]:32241        NodePort       1 => [fd02::8c]:69        
	 102   [fd04::11]:32241/i      NodePort       1 => [fd02::8c]:69        
	                                              2 => [fd02::11d]:69       
	 103   [::]:32241              NodePort       1 => [fd02::8c]:69        
	 104   [::]:32241/i            NodePort       1 => [fd02::8c]:69        
	                                              2 => [fd02::11d]:69       
	 105   [fd03::cf7b]:10069      ClusterIP      1 => [fd02::169]:69       
	 106   [fd03::cf7b]:10080      ClusterIP      1 => [fd02::169]:80       
	 107   [fd04::11]:32744        NodePort                                 
	 108   [fd04::11]:32744/i      NodePort       1 => [fd02::169]:69       
	 109   [::]:32744              NodePort                                 
	 110   [::]:32744/i            NodePort       1 => [fd02::169]:69       
	 111   [fd04::11]:31873        NodePort                                 
	 112   [fd04::11]:31873/i      NodePort       1 => [fd02::169]:80       
	 113   [::]:31873              NodePort                                 
	 114   [::]:31873/i            NodePort       1 => [fd02::169]:80       
	 115   [fd03::a70a]:10080      ClusterIP      1 => [fd02::169]:80       
	 116   [fd03::a70a]:10069      ClusterIP      1 => [fd02::169]:69       
	 117   [fd04::11]:31010        NodePort       1 => [fd02::169]:69       
	 118   [::]:31010              NodePort       1 => [fd02::169]:69       
	 119   [fd04::11]:31830        NodePort       1 => [fd02::169]:80       
	 120   [::]:31830              NodePort       1 => [fd02::169]:80       
	 121   [fd03::f752]:20080      ClusterIP      1 => [fd02::8c]:80        
	                                              2 => [fd02::11d]:80       
	 122   [fd03::f752]:20069      ClusterIP      1 => [fd02::8c]:69        
	                                              2 => [fd02::11d]:69       
	 123   [fd03::999]:20080       ExternalIPs    1 => [fd02::8c]:80        
	                                              2 => [fd02::11d]:80       
	 124   [fd03::999]:20069       ExternalIPs    1 => [fd02::8c]:69        
	                                              2 => [fd02::11d]:69       
	 125   [fd04::11]:32320        NodePort       1 => [fd02::8c]:80        
	                                              2 => [fd02::11d]:80       
	 126   [::]:32320              NodePort       1 => [fd02::8c]:80        
	                                              2 => [fd02::11d]:80       
	 127   [::]:31071              NodePort       1 => [fd02::8c]:69        
	                                              2 => [fd02::11d]:69       
	 128   [fd04::11]:31071        NodePort       1 => [fd02::8c]:69        
	                                              2 => [fd02::11d]:69       
	 129   192.168.57.11:30185     NodePort                                 
	 130   192.168.57.11:30185/i   NodePort       1 => 10.0.1.154:80        
	 131   192.168.57.11:31878     NodePort       1 => 10.0.0.144:80        
	 132   192.168.57.11:31878/i   NodePort       1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 133   192.168.57.11:31250     NodePort       1 => 10.0.0.144:69        
	 134   192.168.57.11:31250/i   NodePort       1 => 10.0.0.144:69        
	                                              2 => 10.0.1.88:69         
	 135   [fd05::11]:30326        NodePort       1 => [fd02::8c]:69        
	                                              2 => [fd02::11d]:69       
	 136   [fd05::11]:32271        NodePort       1 => [fd02::8c]:80        
	                                              2 => [fd02::11d]:80       
	 137   [fd05::11]:30738        NodePort       1 => [fd02::8c]:80        
	 138   [fd05::11]:30738/i      NodePort       1 => [fd02::8c]:80        
	                                              2 => [fd02::11d]:80       
	 139   [fd05::11]:32241        NodePort       1 => [fd02::8c]:69        
	 140   [fd05::11]:32241/i      NodePort       1 => [fd02::8c]:69        
	                                              2 => [fd02::11d]:69       
	 141   [fd05::11]:31873        NodePort                                 
	 142   [fd05::11]:31873/i      NodePort       1 => [fd02::169]:80       
	 143   [fd05::11]:32744        NodePort                                 
	 144   [fd05::11]:32744/i      NodePort       1 => [fd02::169]:69       
	 145   192.168.57.11:31662     NodePort       1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 146   192.168.57.11:30099     NodePort       1 => 10.0.0.144:69        
	                                              2 => 10.0.1.88:69         
	 147   192.168.57.11:32115     NodePort       1 => 10.0.0.144:69        
	                                              2 => 10.0.1.88:69         
	 148   192.168.57.11:31761     NodePort       1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 149   192.168.57.11:31501     NodePort                                 
	 150   192.168.57.11:31501/i   NodePort       1 => 10.0.1.154:69        
	 151   192.168.57.11:30608     NodePort                                 
	 152   192.168.57.11:30608/i   NodePort       1 => 10.0.1.154:80        
	 153   192.168.57.11:30500     NodePort       1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 154   192.168.57.11:31717     NodePort       1 => 10.0.0.144:69        
	                                              2 => 10.0.1.88:69         
	 155   [fd05::11]:30634        NodePort       1 => [fd02::8c]:69        
	                                              2 => [fd02::11d]:69       
	 156   [fd05::11]:31036        NodePort       1 => [fd02::8c]:80        
	                                              2 => [fd02::11d]:80       
	 157   [fd05::11]:32320        NodePort       1 => [fd02::8c]:80        
	                                              2 => [fd02::11d]:80       
	 158   [fd05::11]:31071        NodePort       1 => [fd02::8c]:69        
	                                              2 => [fd02::11d]:69       
	 159   192.168.57.11:30688     NodePort       1 => 10.0.1.154:80        
	 160   192.168.57.11:32523     NodePort       1 => 10.0.1.154:69        
	 161   192.168.57.11:32545     NodePort       1 => 10.0.0.144:80        
	                                              2 => 10.0.1.88:80         
	 162   [fd05::11]:31830        NodePort       1 => [fd02::169]:80       
	 163   [fd05::11]:31010        NodePort       1 => [fd02::169]:69       
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-w6854 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                  IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                       
	 2446       Disabled           Disabled          4          reserved:health                                                              fd02::3e   10.0.0.7     ready   
	 2474       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                           ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                    
	                                                            k8s:node-role.kubernetes.io/master                                                                           
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                  
	                                                            reserved:host                                                                                                
	 3300       Disabled           Disabled          909        k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system   fd02::48   10.0.0.179   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                     
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                              
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                  
	                                                            k8s:k8s-app=kube-dns                                                                                         
	 3484       Disabled           Disabled          24011      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default       fd02::8c   10.0.0.144   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                     
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                              
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                      
	                                                            k8s:zgroup=testDS                                                                                            
	 4074       Disabled           Disabled          5446       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default       fd02::d3   10.0.0.14    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                     
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                              
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                      
	                                                            k8s:zgroup=testDSClient                                                                                      
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
01:48:23 STEP: Running AfterEach for block EntireTestsuite K8sServicesTest
01:48:23 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|1a7c8ff8_K8sServicesTest_Checks_service_across_nodes_Tests_NodePort_BPF_Tests_with_secondary_NodePort_device.zip]]


ZIP Links:

Click to show.

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.19//47/artifact/1a7c8ff8_K8sServicesTest_Checks_service_across_nodes_Tests_NodePort_BPF_Tests_with_secondary_NodePort_device.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.19//47/artifact/d7384912_K8sUpdates_Tests_upgrade_and_downgrade_from_a_Cilium_stable_image_to_master.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.19//47/artifact/test_results_Cilium-PR-K8s-1.22-kernel-4.19_47_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.19/47/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
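
For anyone triaging a recurrence: the per-agent dumps above come from cilium service list and cilium endpoint list. Below is a minimal sketch of collecting the same output from every agent in one pass; it assumes the standard k8s-app=cilium label on the Cilium DaemonSet pods (adjust the selector if your deployment labels differ):

    # Iterate over all Cilium agent pods and dump their service and endpoint tables.
    # NOTE: the k8s-app=cilium selector is an assumption based on the default DaemonSet labels.
    for pod in $(kubectl -n kube-system get pods -l k8s-app=cilium -o name); do
      echo "=== ${pod#pod/} ==="
      kubectl -n kube-system exec "${pod#pod/}" -c cilium-agent -- cilium service list
      kubectl -n kube-system exec "${pod#pod/}" -c cilium-agent -- cilium endpoint list
    done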

@joestringer joestringer moved this from To quarantine/disable to Unassigned in 1.11 CI Dec 3, 2021
@borkmann borkmann added the sig/datapath label Dec 6, 2021
@borkmann
Member

borkmann commented Dec 8, 2021

Potential duplicate of #17895; waiting until the latter is resolved.

@aanm aanm added the area/CI label Jan 6, 2022
@brb
Member

brb commented May 10, 2022

Is anyone still hitting this?

@github-actions

This issue has been automatically marked as stale because it has not
had recent activity. It will be closed if no further activity occurs.

@github-actions github-actions bot added and then removed the stale label Jul 10, 2022
@github-actions

This issue has been automatically marked as stale because it has not
had recent activity. It will be closed if no further activity occurs.

@github-actions github-actions bot added the stale label Sep 14, 2022
@github-actions

This issue has not seen any activity since it was marked stale.
Closing.

1.11 CI automation moved this from Unassigned (Datapath) to Evaluate to exit quarantine Sep 29, 2022