
CI: K8sDatapathServicesTest Checks E/W loadbalancing (ClusterIP, NodePort from inside cluster, etc) with L7 policy Tests NodePort with L7 Policy #23258

Closed
maintainer-s-little-helper bot opened this issue Jan 23, 2023 · 1 comment · Fixed by #23346
Labels
area/proxy — Impacts proxy components, including DNS, Kafka, Envoy and/or XDS servers.
ci/flake — This is a known failure that occurs in the tree. Please investigate me!


Test Name

K8sDatapathServicesTest Checks E/W loadbalancing (ClusterIP, NodePort from inside cluster, etc) with L7 policy Tests NodePort with L7 Policy

Failure Output

FAIL: Request from k8s1 to service http://10.98.246.115:10080 failed

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.18-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Request from k8s1 to service http://10.98.246.115:10080 failed
Expected command: kubectl exec -n kube-system log-gatherer-hlhkh -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://10.98.246.115:10080 -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 
	 
	 Hostname: testds-ldwkp
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=::ffff:10.0.2.15
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://10.98.246.115:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=10.98.246.115:10080
	 	user-agent=cilium-test-11673/2
	 	x-envoy-expected-rq-timeout-ms=3600000
	 	x-forwarded-proto=http
	 	x-request-id=17d1da1a-791f-455a-b879-e189a00459a4
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 11673/2 exit code: 0
	 
	 
	 Hostname: testds-c424g
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=::ffff:10.0.2.15
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://10.98.246.115:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=10.98.246.115:10080
	 	user-agent=cilium-test-11673/3
	 	x-envoy-expected-rq-timeout-ms=3600000
	 	x-forwarded-proto=http
	 	x-request-id=a0f4024e-d8f9-4bad-83e1-602c4c2b495f
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 11673/3 exit code: 0
	 
	 
	 Hostname: testds-c424g
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=::ffff:10.0.2.15
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://10.98.246.115:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=10.98.246.115:10080
	 	user-agent=cilium-test-11673/4
	 	x-envoy-expected-rq-timeout-ms=3600000
	 	x-forwarded-proto=http
	 	x-request-id=d3afd482-2b01-4aea-8a7a-b938adea23e9
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 11673/4 exit code: 0
	 
	 
	 Hostname: testds-c424g
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=::ffff:10.0.2.15
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://10.98.246.115:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=10.98.246.115:10080
	 	user-agent=cilium-test-11673/5
	 	x-envoy-expected-rq-timeout-ms=3600000
	 	x-forwarded-proto=http
	 	x-request-id=943cc49c-9d69-4652-bd0c-f6f3470dcac5
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 11673/5 exit code: 0
	 
	 
	 Hostname: testds-ldwkp
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=::ffff:10.0.2.15
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://10.98.246.115:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=10.98.246.115:10080
	 	user-agent=cilium-test-11673/6
	 	x-envoy-expected-rq-timeout-ms=3600000
	 	x-forwarded-proto=http
	 	x-request-id=f9e2c96c-6366-4bc3-8719-fa1104e1a80f
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 11673/6 exit code: 0
	 
	 
	 Hostname: testds-c424g
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=::ffff:10.0.2.15
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://10.98.246.115:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=10.98.246.115:10080
	 	user-agent=cilium-test-11673/7
	 	x-envoy-expected-rq-timeout-ms=3600000
	 	x-forwarded-proto=http
	 	x-request-id=5620c4d4-8b1e-4420-aa18-293fe19090e0
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 11673/7 exit code: 0
	 
	 
	 Hostname: testds-c424g
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=::ffff:10.0.2.15
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://10.98.246.115:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=10.98.246.115:10080
	 	user-agent=cilium-test-11673/8
	 	x-envoy-expected-rq-timeout-ms=3600000
	 	x-forwarded-proto=http
	 	x-request-id=201fc49a-df5e-4f9c-a6ce-8e9d099524ed
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 11673/8 exit code: 0
	 
	 
	 Hostname: testds-ldwkp
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=::ffff:10.0.2.15
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://10.98.246.115:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=10.98.246.115:10080
	 	user-agent=cilium-test-11673/9
	 	x-envoy-expected-rq-timeout-ms=3600000
	 	x-forwarded-proto=http
	 	x-request-id=74d968ea-346e-4ecf-a246-5be51ecdff26
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 11673/9 exit code: 0
	 
	 
	 Hostname: testds-ldwkp
	 
	 Pod Information:
	 	-no pod information available-
	 
	 Server values:
	 	server_version=nginx: 1.13.3 - lua: 10008
	 
	 Request Information:
	 	client_address=::ffff:10.0.2.15
	 	method=GET
	 	real path=/
	 	query=
	 	request_version=1.1
	 	request_scheme=http
	 	request_uri=http://10.98.246.115:80/
	 
	 Request Headers:
	 	accept=*/*
	 	host=10.98.246.115:10080
	 	user-agent=cilium-test-11673/10
	 	x-envoy-expected-rq-timeout-ms=3600000
	 	x-forwarded-proto=http
	 	x-request-id=54c545a7-714d-49e5-93bd-a86ef65d29f3
	 
	 Request Body:
	 	-no body in request-
	 
	 Test round 11673/10 exit code: 0
	 failed: :11673/1=28
	 
Stderr:
 	 HTTP/1.1 200 OK
	 date: Mon, 23 Jan 2023 13:57:23 GMT
	 content-type: text/plain
	 server: envoy
	 x-envoy-upstream-service-time: 0
	 transfer-encoding: chunked
	 
	 HTTP/1.1 200 OK
	 date: Mon, 23 Jan 2023 13:57:23 GMT
	 content-type: text/plain
	 server: envoy
	 x-envoy-upstream-service-time: 0
	 transfer-encoding: chunked
	 
	 HTTP/1.1 200 OK
	 date: Mon, 23 Jan 2023 13:57:23 GMT
	 content-type: text/plain
	 server: envoy
	 x-envoy-upstream-service-time: 0
	 transfer-encoding: chunked
	 
	 HTTP/1.1 200 OK
	 date: Mon, 23 Jan 2023 13:57:23 GMT
	 content-type: text/plain
	 server: envoy
	 x-envoy-upstream-service-time: 0
	 transfer-encoding: chunked
	 
	 HTTP/1.1 200 OK
	 date: Mon, 23 Jan 2023 13:57:23 GMT
	 content-type: text/plain
	 server: envoy
	 x-envoy-upstream-service-time: 0
	 transfer-encoding: chunked
	 
	 HTTP/1.1 200 OK
	 date: Mon, 23 Jan 2023 13:57:23 GMT
	 content-type: text/plain
	 server: envoy
	 x-envoy-upstream-service-time: 0
	 transfer-encoding: chunked
	 
	 HTTP/1.1 200 OK
	 date: Mon, 23 Jan 2023 13:57:23 GMT
	 content-type: text/plain
	 server: envoy
	 x-envoy-upstream-service-time: 0
	 transfer-encoding: chunked
	 
	 HTTP/1.1 200 OK
	 date: Mon, 23 Jan 2023 13:57:23 GMT
	 content-type: text/plain
	 server: envoy
	 x-envoy-upstream-service-time: 0
	 transfer-encoding: chunked
	 
	 HTTP/1.1 200 OK
	 date: Mon, 23 Jan 2023 13:57:23 GMT
	 content-type: text/plain
	 server: envoy
	 x-envoy-upstream-service-time: 0
	 transfer-encoding: chunked
	 
	 command terminated with exit code 42
	 

/home/jenkins/workspace/Cilium-PR-K8s-1.18-kernel-4.9/src/github.com/cilium/cilium/test/k8s/service_helpers.go:532
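The one-liner the test runs is hard to read in the log above. Below is a sketch that expands the same loop logic; the real `curl` invocation is replaced by a stubbed `request` function (an assumption for illustration only) that fails round 1 with curl's exit code 28 (operation timed out), mirroring the `failed: :11673/1=28` line in the output.

```shell
#!/bin/bash
# Stand-in for the real curl call against http://10.98.246.115:10080.
# Round 1 is forced to fail with 28 (curl: operation timed out), the
# code recorded in this report; all other rounds succeed.
request() {
  if [ "$1" -eq 1 ]; then return 28; fi
}

run_rounds() {
  local fails="" id=$RANDOM i
  for i in $(seq 1 10); do
    if request "$i"; then
      echo "Test round $id/$i exit code: $?"
    else
      # Record "id/round=exitcode", e.g. ":11673/1=28".
      fails=$fails:$id/$i=$?
    fi
  done
  [ -n "$fails" ] && echo "failed: $fails"
  local cnt="${fails//[^:]}"      # keep only the ":" separators, one per failure
  if [ ${#cnt} -gt 0 ]; then return 42; fi
  return 0
}

run_rounds
rc=$?                             # 42 here, since round 1 failed
```

So a single timeout out of ten rounds is enough to make the whole command exit 42, which is exactly what the `Exitcode: 42` in the stacktrace reflects, even though rounds 2 through 10 all returned HTTP 200.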

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Cilium pods: [cilium-2c7jq cilium-g98wt]
Netpols loaded: 
CiliumNetworkPolicies loaded: default::allow-all-within-namespace default::l7-policy-demo 
Endpoint Policy Enforcement:
Pod                          Ingress   Egress
test-k8s2-7f6b9cdc7c-mq9f6   false     false
testclient-gcrrx             false     false
testclient-zbqdh             false     false
testds-c424g                 false     false
app1-6698f67795-c775n        false     false
app1-6698f67795-d5x2g        false     false
echo-748bf97b8f-pm5pk        false     false
testds-ldwkp                 false     false
coredns-86c74c674b-z6d9p     false     false
app2-558747984b-gkz5d        false     false
app3-5cc776d4f9-xjrn5        false     false
echo-748bf97b8f-xxp4f        false     false
Cilium agent 'cilium-2c7jq': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 56 Failed 0
Cilium agent 'cilium-g98wt': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 46 Failed 0


Standard Error

13:57:09 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathServicesTest Checks E/W loadbalancing (ClusterIP, NodePort from inside cluster, etc) with L7 policy
13:57:09 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.18-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/l7-policy-demo.yaml
13:57:17 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[::ffff:192.168.56.12]:30048/hello"
13:57:17 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://192.168.56.11:30879"
13:57:17 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[::ffff:192.168.56.11]:30048/hello"
13:57:17 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://192.168.56.11:30048/hello"
13:57:17 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://192.168.56.12:30879"
13:57:17 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[::ffff:192.168.56.12]:30879"
13:57:17 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[::ffff:192.168.56.11]:30879"
13:57:17 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://127.0.0.1:30048/hello"
13:57:17 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://10.98.246.115:10069/hello"
13:57:17 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[::ffff:127.0.0.1]:30879"
13:57:17 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[::ffff:127.0.0.1]:30048/hello"
13:57:17 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://127.0.0.1:30879"
13:57:17 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://192.168.56.12:30048/hello"
13:57:17 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://10.98.246.115:10080"
13:57:17 STEP: Making 10 curl requests from testclient-gcrrx pod to service tftp://10.98.246.115:10069/hello
13:57:17 STEP: Making 10 curl requests from testclient-gcrrx pod to service tftp://192.168.56.12:30048/hello
13:57:17 STEP: Making 10 curl requests from testclient-gcrrx pod to service http://10.98.246.115:10080
13:57:17 STEP: Making 10 curl requests from testclient-gcrrx pod to service http://[::ffff:192.168.56.11]:30879
13:57:17 STEP: Making 10 curl requests from testclient-gcrrx pod to service http://192.168.56.11:30879
13:57:17 STEP: Making 10 curl requests from testclient-gcrrx pod to service http://[::ffff:192.168.56.12]:30879
13:57:17 STEP: Making 10 curl requests from testclient-gcrrx pod to service tftp://192.168.56.11:30048/hello
13:57:17 STEP: Making 10 curl requests from testclient-gcrrx pod to service tftp://[::ffff:192.168.56.11]:30048/hello
13:57:17 STEP: Making 10 curl requests from testclient-gcrrx pod to service tftp://[::ffff:192.168.56.12]:30048/hello
13:57:17 STEP: Making 10 curl requests from testclient-gcrrx pod to service http://192.168.56.12:30879
13:57:17 STEP: Making 10 curl requests from testclient-zbqdh pod to service tftp://192.168.56.12:30048/hello
13:57:18 STEP: Making 10 curl requests from testclient-zbqdh pod to service http://10.98.246.115:10080
13:57:18 STEP: Making 10 curl requests from testclient-zbqdh pod to service http://192.168.56.11:30879
13:57:18 STEP: Making 10 curl requests from testclient-zbqdh pod to service http://[::ffff:192.168.56.11]:30879
13:57:18 STEP: Making 10 curl requests from testclient-zbqdh pod to service tftp://10.98.246.115:10069/hello
13:57:18 STEP: Making 10 curl requests from testclient-zbqdh pod to service tftp://192.168.56.11:30048/hello
13:57:18 STEP: Making 10 curl requests from testclient-zbqdh pod to service tftp://[::ffff:192.168.56.12]:30048/hello
13:57:18 STEP: Making 10 curl requests from testclient-zbqdh pod to service http://[::ffff:192.168.56.12]:30879
13:57:18 STEP: Making 10 curl requests from testclient-zbqdh pod to service http://192.168.56.12:30879
13:57:18 STEP: Making 10 curl requests from testclient-zbqdh pod to service tftp://[::ffff:192.168.56.11]:30048/hello
FAIL: Request from k8s1 to service http://10.98.246.115:10080 failed
(The expected command, exit code, stdout, and stderr are identical to the failure output shown in the Stacktrace section above.)

=== Test Finished at 2023-01-23T13:57:23Z====
13:57:23 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathServicesTest
===================== TEST FAILED =====================
13:57:23 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathServicesTest
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE           NAME                               READY   STATUS    RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 cilium-monitoring   grafana-54dbdc987-pxw5c            0/1     Running   0          72m     10.0.1.128      k8s2   <none>           <none>
	 cilium-monitoring   prometheus-6ff848df8b-pzd46        1/1     Running   0          72m     10.0.1.212      k8s2   <none>           <none>
	 default             app1-6698f67795-c775n              2/2     Running   0          4m26s   10.0.1.68       k8s1   <none>           <none>
	 default             app1-6698f67795-d5x2g              2/2     Running   0          4m26s   10.0.1.234      k8s1   <none>           <none>
	 default             app2-558747984b-gkz5d              1/1     Running   0          4m26s   10.0.1.177      k8s1   <none>           <none>
	 default             app3-5cc776d4f9-xjrn5              1/1     Running   0          4m26s   10.0.1.48       k8s1   <none>           <none>
	 default             echo-748bf97b8f-pm5pk              2/2     Running   0          4m25s   10.0.1.181      k8s1   <none>           <none>
	 default             echo-748bf97b8f-xxp4f              2/2     Running   0          4m25s   10.0.0.72       k8s2   <none>           <none>
	 default             test-k8s2-7f6b9cdc7c-mq9f6         2/2     Running   0          4m26s   10.0.0.153      k8s2   <none>           <none>
	 default             testclient-gcrrx                   1/1     Running   0          4m26s   10.0.1.42       k8s1   <none>           <none>
	 default             testclient-zbqdh                   1/1     Running   0          4m26s   10.0.0.89       k8s2   <none>           <none>
	 default             testds-c424g                       2/2     Running   0          4m26s   10.0.0.244      k8s2   <none>           <none>
	 default             testds-ldwkp                       2/2     Running   0          4m26s   10.0.1.66       k8s1   <none>           <none>
	 kube-system         cilium-2c7jq                       1/1     Running   0          2m20s   192.168.56.11   k8s1   <none>           <none>
	 kube-system         cilium-g98wt                       1/1     Running   0          2m20s   192.168.56.12   k8s2   <none>           <none>
	 kube-system         cilium-operator-85b94dd675-4bdlj   1/1     Running   0          2m20s   192.168.56.12   k8s2   <none>           <none>
	 kube-system         cilium-operator-85b94dd675-z2zpw   1/1     Running   0          2m20s   192.168.56.11   k8s1   <none>           <none>
	 kube-system         coredns-86c74c674b-z6d9p           1/1     Running   0          7m48s   10.0.0.79       k8s2   <none>           <none>
	 kube-system         etcd-k8s1                          1/1     Running   0          76m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-apiserver-k8s1                1/1     Running   0          76m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-controller-manager-k8s1       1/1     Running   4          76m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-proxy-n4bj2                   1/1     Running   0          74m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-proxy-szzhn                   1/1     Running   0          72m     192.168.56.12   k8s2   <none>           <none>
	 kube-system         kube-scheduler-k8s1                1/1     Running   5          76m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         log-gatherer-4rb44                 1/1     Running   0          72m     192.168.56.12   k8s2   <none>           <none>
	 kube-system         log-gatherer-hlhkh                 1/1     Running   0          72m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         registry-adder-ptpm2               1/1     Running   0          72m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         registry-adder-xnxw9               1/1     Running   0          72m     192.168.56.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-2c7jq cilium-g98wt]
cmd: kubectl exec -n kube-system cilium-2c7jq -c cilium-agent -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend               Service Type   Backend                            
	 1    10.96.219.233:3000     ClusterIP                                         
	 3    10.96.0.10:53          ClusterIP      1 => 10.0.0.79:53 (active)         
	 4    10.96.0.10:9153        ClusterIP      1 => 10.0.0.79:9153 (active)       
	 5    10.96.0.1:443          ClusterIP      1 => 192.168.56.11:6443 (active)   
	 6    10.101.12.74:9090      ClusterIP      1 => 10.0.1.212:9090 (active)      
	 8    10.96.216.212:80       ClusterIP      1 => 10.0.1.68:80 (active)         
	                                            2 => 10.0.1.234:80 (active)        
	 9    10.96.216.212:69       ClusterIP      1 => 10.0.1.68:69 (active)         
	                                            2 => 10.0.1.234:69 (active)        
	 10   10.105.116.126:80      ClusterIP      1 => 10.0.0.244:80 (active)        
	                                            2 => 10.0.1.66:80 (active)         
	 11   10.105.116.126:69      ClusterIP      1 => 10.0.0.244:69 (active)        
	                                            2 => 10.0.1.66:69 (active)         
	 12   10.98.246.115:10080    ClusterIP      1 => 10.0.0.244:80 (active)        
	                                            2 => 10.0.1.66:80 (active)         
	 13   10.98.246.115:10069    ClusterIP      1 => 10.0.0.244:69 (active)        
	                                            2 => 10.0.1.66:69 (active)         
	 14   10.108.80.117:10080    ClusterIP      1 => 10.0.0.244:80 (active)        
	                                            2 => 10.0.1.66:80 (active)         
	 15   10.108.80.117:10069    ClusterIP      1 => 10.0.0.244:69 (active)        
	                                            2 => 10.0.1.66:69 (active)         
	 16   10.99.107.120:10069    ClusterIP      1 => 10.0.0.244:69 (active)        
	                                            2 => 10.0.1.66:69 (active)         
	 17   10.99.107.120:10080    ClusterIP      1 => 10.0.0.244:80 (active)        
	                                            2 => 10.0.1.66:80 (active)         
	 18   10.101.29.163:10080    ClusterIP      1 => 10.0.0.153:80 (active)        
	 19   10.101.29.163:10069    ClusterIP      1 => 10.0.0.153:69 (active)        
	 20   10.97.116.245:80       ClusterIP      1 => 10.0.0.244:80 (active)        
	                                            2 => 10.0.1.66:80 (active)         
	 21   10.106.224.132:10080   ClusterIP      1 => 10.0.0.153:80 (active)        
	 22   10.106.224.132:10069   ClusterIP      1 => 10.0.0.153:69 (active)        
	 23   10.108.188.24:80       ClusterIP      1 => 10.0.0.153:80 (active)        
	 24   10.101.194.80:20069    ClusterIP      1 => 10.0.0.244:69 (active)        
	                                            2 => 10.0.1.66:69 (active)         
	 25   10.101.194.80:20080    ClusterIP      1 => 10.0.0.244:80 (active)        
	                                            2 => 10.0.1.66:80 (active)         
	 26   10.100.222.85:69       ClusterIP      1 => 10.0.0.72:69 (active)         
	                                            2 => 10.0.1.181:69 (active)        
	 27   10.100.222.85:80       ClusterIP      1 => 10.0.0.72:80 (active)         
	                                            2 => 10.0.1.181:80 (active)        
	 28   10.107.161.93:443      ClusterIP      1 => 192.168.56.12:4244 (active)   
	                                            2 => 192.168.56.11:4244 (active)   
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-2c7jq -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                            IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                  
	 45         Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                      ready   
	                                                            k8s:node-role.kubernetes.io/master                                                      
	                                                            reserved:host                                                                           
	 230        Disabled           Disabled          28166      k8s:id=app3                                            fd02::1e4   10.0.1.48    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                         
	                                                            k8s:io.kubernetes.pod.namespace=default                                                 
	                                                            k8s:zgroup=testapp                                                                      
	 489        Disabled           Disabled          35975      k8s:io.cilium.k8s.policy.cluster=default               fd02::10f   10.0.1.42    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                         
	                                                            k8s:io.kubernetes.pod.namespace=default                                                 
	                                                            k8s:zgroup=testDSClient                                                                 
	 624        Disabled           Disabled          17493      k8s:appSecond=true                                     fd02::13f   10.0.1.177   ready   
	                                                            k8s:id=app2                                                                             
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=app2-account                                    
	                                                            k8s:io.kubernetes.pod.namespace=default                                                 
	                                                            k8s:zgroup=testapp                                                                      
	 783        Enabled            Disabled          52279      k8s:io.cilium.k8s.policy.cluster=default               fd02::119   10.0.1.66    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                         
	                                                            k8s:io.kubernetes.pod.namespace=default                                                 
	                                                            k8s:zgroup=testDS                                                                       
	 2270       Disabled           Disabled          5976       k8s:id=app1                                            fd02::13d   10.0.1.68    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=app1-account                                    
	                                                            k8s:io.kubernetes.pod.namespace=default                                                 
	                                                            k8s:zgroup=testapp                                                                      
	 2355       Disabled           Disabled          4          reserved:health                                        fd02::16e   10.0.1.148   ready   
	 3413       Enabled            Enabled           18906      k8s:io.cilium.k8s.policy.cluster=default               fd02::11d   10.0.1.181   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                         
	                                                            k8s:io.kubernetes.pod.namespace=default                                                 
	                                                            k8s:name=echo                                                                           
	 3769       Disabled           Disabled          5976       k8s:id=app1                                            fd02::175   10.0.1.234   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=app1-account                                    
	                                                            k8s:io.kubernetes.pod.namespace=default                                                 
	                                                            k8s:zgroup=testapp                                                                      
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-g98wt -c cilium-agent -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend               Service Type   Backend                            
	 1    10.96.219.233:3000     ClusterIP                                         
	 3    10.96.0.10:9153        ClusterIP      1 => 10.0.0.79:9153 (active)       
	 4    10.96.0.10:53          ClusterIP      1 => 10.0.0.79:53 (active)         
	 5    10.96.0.1:443          ClusterIP      1 => 192.168.56.11:6443 (active)   
	 6    10.101.12.74:9090      ClusterIP      1 => 10.0.1.212:9090 (active)      
	 8    10.96.216.212:80       ClusterIP      1 => 10.0.1.68:80 (active)         
	                                            2 => 10.0.1.234:80 (active)        
	 9    10.96.216.212:69       ClusterIP      1 => 10.0.1.68:69 (active)         
	                                            2 => 10.0.1.234:69 (active)        
	 10   10.105.116.126:69      ClusterIP      1 => 10.0.0.244:69 (active)        
	                                            2 => 10.0.1.66:69 (active)         
	 11   10.105.116.126:80      ClusterIP      1 => 10.0.0.244:80 (active)        
	                                            2 => 10.0.1.66:80 (active)         
	 12   10.98.246.115:10080    ClusterIP      1 => 10.0.0.244:80 (active)        
	                                            2 => 10.0.1.66:80 (active)         
	 13   10.98.246.115:10069    ClusterIP      1 => 10.0.0.244:69 (active)        
	                                            2 => 10.0.1.66:69 (active)         
	 14   10.108.80.117:10080    ClusterIP      1 => 10.0.0.244:80 (active)        
	                                            2 => 10.0.1.66:80 (active)         
	 15   10.108.80.117:10069    ClusterIP      1 => 10.0.0.244:69 (active)        
	                                            2 => 10.0.1.66:69 (active)         
	 16   10.99.107.120:10069    ClusterIP      1 => 10.0.0.244:69 (active)        
	                                            2 => 10.0.1.66:69 (active)         
	 17   10.99.107.120:10080    ClusterIP      1 => 10.0.0.244:80 (active)        
	                                            2 => 10.0.1.66:80 (active)         
	 18   10.101.29.163:10080    ClusterIP      1 => 10.0.0.153:80 (active)        
	 19   10.101.29.163:10069    ClusterIP      1 => 10.0.0.153:69 (active)        
	 20   10.97.116.245:80       ClusterIP      1 => 10.0.0.244:80 (active)        
	                                            2 => 10.0.1.66:80 (active)         
	 21   10.106.224.132:10080   ClusterIP      1 => 10.0.0.153:80 (active)        
	 22   10.106.224.132:10069   ClusterIP      1 => 10.0.0.153:69 (active)        
	 23   10.108.188.24:80       ClusterIP      1 => 10.0.0.153:80 (active)        
	 24   10.101.194.80:20069    ClusterIP      1 => 10.0.0.244:69 (active)        
	                                            2 => 10.0.1.66:69 (active)         
	 25   10.101.194.80:20080    ClusterIP      1 => 10.0.0.244:80 (active)        
	                                            2 => 10.0.1.66:80 (active)         
	 26   10.100.222.85:80       ClusterIP      1 => 10.0.0.72:80 (active)         
	                                            2 => 10.0.1.181:80 (active)        
	 27   10.100.222.85:69       ClusterIP      1 => 10.0.0.72:69 (active)         
	                                            2 => 10.0.1.181:69 (active)        
	 28   10.107.161.93:443      ClusterIP      1 => 192.168.56.12:4244 (active)   
	                                            2 => 192.168.56.11:4244 (active)   
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-g98wt -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                       IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                            
	 314        Disabled           Disabled          20343      k8s:io.cilium.k8s.policy.cluster=default          fd02::44   10.0.0.79    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                   
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                       
	                                                            k8s:k8s-app=kube-dns                                                              
	 461        Disabled           Disabled          47988      k8s:io.cilium.k8s.policy.cluster=default          fd02::8c   10.0.0.153   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                   
	                                                            k8s:io.kubernetes.pod.namespace=default                                           
	                                                            k8s:zgroup=test-k8s2                                                              
	 890        Enabled            Enabled           18906      k8s:io.cilium.k8s.policy.cluster=default          fd02::47   10.0.0.72    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                   
	                                                            k8s:io.kubernetes.pod.namespace=default                                           
	                                                            k8s:name=echo                                                                     
	 1460       Disabled           Disabled          35975      k8s:io.cilium.k8s.policy.cluster=default          fd02::fd   10.0.0.89    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                   
	                                                            k8s:io.kubernetes.pod.namespace=default                                           
	                                                            k8s:zgroup=testDSClient                                                           
	 1634       Enabled            Disabled          52279      k8s:io.cilium.k8s.policy.cluster=default          fd02::2d   10.0.0.244   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                   
	                                                            k8s:io.kubernetes.pod.namespace=default                                           
	                                                            k8s:zgroup=testDS                                                                 
	 2381       Disabled           Disabled          4          reserved:health                                   fd02::aa   10.0.0.82    ready   
	 3924       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                ready   
	                                                            reserved:host                                                                     
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
13:57:38 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|087007e1_K8sDatapathServicesTest_Checks_E-W_loadbalancing_(ClusterIP,_NodePort_from_inside_cluster,_etc)_with_L7_policy_Tests_NodePort_with_L7_Policy.zip]]
13:57:40 STEP: Running AfterAll block for EntireTestsuite K8sDatapathServicesTest Checks E/W loadbalancing (ClusterIP, NodePort from inside cluster, etc) with L7 policy
13:57:44 STEP: Running AfterAll block for EntireTestsuite K8sDatapathServicesTest Checks E/W loadbalancing (ClusterIP, NodePort from inside cluster, etc)


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-1.18-kernel-4.9//2370/artifact/087007e1_K8sDatapathServicesTest_Checks_E-W_loadbalancing_(ClusterIP,_NodePort_from_inside_cluster,_etc)_with_L7_policy_Tests_NodePort_with_L7_Policy.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.18-kernel-4.9//2370/artifact/test_results_Cilium-PR-K8s-1.18-kernel-4.9_2370_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.18-kernel-4.9/2370/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

@maintainer-s-little-helper bot added the ci/flake (This is a known failure that occurs in the tree. Please investigate me!) label on Jan 23, 2023
@pchaigno (Member) commented:

This regression was introduced by #21980. cc @brb
