CI: Suite-k8s-1.20.K8sServicesTest Checks service across nodes Tests NodePort (kube-proxy) with the host firewall and externalTrafficPolicy=Local: Exit status 42
#15103
Closed
joestringer opened this issue Feb 24, 2021 · 2 comments
Labels
area/CI: Continuous Integration testing issue or flake
ci/flake: This is a known failure that occurs in the tree. Please investigate me!
stale: The stale bot thinks this issue is old. Add "pinned" label to prevent this from becoming stale.
/home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:514
Request from k8s1 to service http://192.168.36.12:31372 failed
Expected command: kubectl exec -n kube-system log-gatherer-lprgb -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://192.168.36.12:31372 -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi'
To succeed, but it failed:
Exitcode: 42
Err: exit status 42
Stdout:
failed: :5674/1=28:5674/2=28:5674/3=28:5674/4=28:5674/5=28:5674/6=28:5674/7=28:5674/8=28:5674/9=28:5674/10=28
Stderr:
command terminated with exit code 42
/home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.9/src/github.com/cilium/cilium/test/k8sT/Services.go:1583
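The failure above comes from the retry loop in the `kubectl exec` command: ten curl requests are made to the NodePort, each failing round is appended to `fails` with its curl exit code, and any failure makes the loop exit with status 42. In this log every round recorded curl exit code 28, i.e. the request timed out (CURLE_OPERATION_TIMEDOUT). Below is a minimal sketch of that loop with the real curl call replaced by a stub (`fake_curl`, hypothetical) so it runs without cluster access:

```shell
#!/usr/bin/env bash

# Stub standing in for the real curl call against the NodePort; here it
# always "times out" with exit code 28, matching the failure log.
fake_curl() { return 28; }

check_service() {
  fails=""
  id=$RANDOM
  for i in $(seq 1 10); do
    if fake_curl; then
      echo "Test round $id/$i exit code: $?"
    else
      # Inside the else branch, $? still holds the condition's exit code (28).
      fails=$fails:$id/$i=$?
    fi
  done
  [ -n "$fails" ] && echo "failed: $fails"
  cnt="${fails//[^:]}"                  # one ':' remains per failed round
  if [ ${#cnt} -gt 0 ]; then return 42; fi
  return 0
}

check_service
echo "status=$?"
```

With the always-failing stub this prints a `failed: :id/1=28:id/2=28:...` line followed by `status=42`, mirroring the `Exitcode: 42` and the `failed:` line in the output above.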
Standard Output
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 4
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Cilium pods: [cilium-562xz cilium-r6hpl]
Netpols loaded:
CiliumNetworkPolicies loaded:
Endpoint Policy Enforcement:
Pod Ingress Egress
prometheus-655fb888d7-l8b9k
test-k8s2-79ff876c9d-r87fl
testclient-hpcsv
testclient-l9vnl
testds-g257g
testds-gqp2x
coredns-867bf6789f-4sfwk
grafana-d69c97b9b-w9l5j
Cilium agent 'cilium-562xz': Status: Ok Health: Ok Nodes "" Kubernetes: Ok KVstore: Ok Controllers: Total 31 Failed 0
Cilium agent 'cilium-r6hpl': Status: Ok Health: Ok Nodes "" Kubernetes: Ok KVstore: Ok Controllers: Total 42 Failed 0
joestringer added the area/CI and ci/flake labels on Feb 24, 2021
This increases the curl connection timeout from 5 to 15 seconds to avoid
issues with IPCache propagation delay. On Cilium master and 1.10, it
seems that IPCache updates in CI can take up to 4-8 seconds.
CI flakes likely caused by the increased IPCache propagation delay:
- cilium#13839
- cilium#14959
- cilium#15103
- cilium#16237
Signed-off-by: Sebastian Wicki <sebastian@isovalent.com>
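The fix described in the commit message amounts to raising curl's `--connect-timeout` in the test helper. A sketch of the change, reconstructed from the command in the failure output and the commit message (not the actual diff; `"$URL"` is a placeholder for the NodePort endpoint):

```shell
# Before: the connection must be established within 5s, which races with
# IPCache propagation (observed to take 4-8s in CI).
curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 "$URL"

# After: allow up to 15s for the connection to be established, keeping
# the 20s cap on the overall transfer.
curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 15 --max-time 20 "$URL"
```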
K8sServicesTest Checks service across nodes Tests NodePort (kube-proxy) with the host firewall and externalTrafficPolicy=Local
Kubernetes 1.20
Cilium v1.10 dev cycle
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.20-kernel-4.9/717/testReport/junit/Suite-k8s-1/20/K8sServicesTest_Checks_service_across_nodes_Tests_NodePort__kube_proxy__with_the_host_firewall_and_externalTrafficPolicy_Local/
Artifacts are too big to attach.
Failed on #14905, which only changes some logging in the operator plus AWS-specific codepaths that are not exercised by this job.
Possibly related to #13839, #13011, #12690.