CI: K8sAgentPolicyTest Basic Test TLS policy #22404

Closed
maintainer-s-little-helper bot opened this issue Nov 28, 2022 · 7 comments · Fixed by #24414
Labels: ci/flake (This is a known failure that occurs in the tree. Please investigate me!), pinned (These issues are not marked stale by our issue bot.)

Comments

@maintainer-s-little-helper

Test Name

K8sAgentPolicyTest Basic Test TLS policy

Failure Output

FAIL: Cannot connect from "app3-79897dfc85-6tkg2" to 'https://www.lyft.com:443/privacy'

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.25-kernel-net-next/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Cannot connect from "app3-79897dfc85-6tkg2" to 'https://www.lyft.com:443/privacy'
Expected command: kubectl exec -n 202211281654k8sagentpolicytestbasictesttlspolicy app3-79897dfc85-6tkg2 -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 -4 -v --cacert /cacert.pem https://www.lyft.com:443/privacy -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" --retry 5 
To succeed, but it failed:
Exitcode: 60 
Err: exit status 60
Stdout:
 	 time-> DNS: '0.038771(18.154.242.24)', Connect: '0.040439',Transfer '0.000000', total '0.044553'
Stderr:
 	 *   Trying 18.154.242.24...
	 * TCP_NODELAY set
	 * Connected to www.lyft.com (18.154.242.24) port 443 (#0)
	 * ALPN, offering http/1.1
	 * successfully set certificate verify locations:
	 *   CAfile: /cacert.pem
	   CApath: none
	 * TLSv1.2 (OUT), TLS handshake, Client hello (1):
	 } [512 bytes data]
	 * TLSv1.2 (IN), TLS handshake, Server hello (2):
	 { [106 bytes data]
	 * TLSv1.2 (IN), TLS handshake, Certificate (11):
	 { [4957 bytes data]
	 * TLSv1.2 (OUT), TLS alert, unknown CA (560):
	 } [2 bytes data]
	 * SSL certificate problem: unable to get local issuer certificate
	 * Closing connection 0
	 command terminated with exit code 60
	 

/home/jenkins/workspace/Cilium-PR-K8s-1.25-kernel-net-next/src/github.com/cilium/cilium/test/k8s/net_policies.go:178
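
For context: curl exit code 60 ("unable to get local issuer certificate") together with --cacert /cacert.pem means the certificate chain presented to the pod did not validate against the test CA, i.e. the TLS-interception proxy either was not in the path or did not re-sign the connection with the expected CA. The l7-policy-tls policy loaded in this run is a TLS-interception policy of the general shape below; this is an illustrative sketch only, with assumed selectors and secret names rather than the exact manifest used by the test:

kubectl apply -f - <<EOF
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: l7-policy-tls
  namespace: 202211281654k8sagentpolicytestbasictesttlspolicy
spec:
  endpointSelector:
    matchLabels:
      id: app3                          # assumed selector for the client pod
  egress:
  - toFQDNs:
    - matchName: "www.lyft.com"
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP
      terminatingTLS:                   # proxy terminates the pod's TLS with this cert/key
        secret:
          namespace: kube-system
          name: tls-terminating-secret  # hypothetical secret name
      originatingTLS:                   # proxy re-originates TLS towards the real server
        secret:
          namespace: kube-system
          name: tls-originating-secret  # hypothetical secret name
      rules:
        http:
        - method: GET
          path: "/privacy"
EOF

The client only trusts the terminating certificate if its issuing CA is in the bundle passed via --cacert (here /cacert.pem), which is exactly the check that failed in this run.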

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 1
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 1 errors/warnings:
Key allocation attempt failed
Cilium pods: [cilium-g742p cilium-zhbj5]
Netpols loaded: 
CiliumNetworkPolicies loaded: 202211281654k8sagentpolicytestbasictesttlspolicy::l7-policy-tls 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
app1-84dcf689db-r4nq6         false     false
app2-6b6857744b-m7j8h         false     false
app3-79897dfc85-6tkg2         false     false
grafana-59957b9549-5xgb5      false     false
prometheus-7c8c9684bb-87clt   false     false
coredns-567b6dd84-fx49k       false     false
app1-84dcf689db-nh8sb         false     false
Cilium agent 'cilium-g742p': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 45 Failed 0
Cilium agent 'cilium-zhbj5': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 24 Failed 0


Standard Error

16:53:43 STEP: Running BeforeAll block for EntireTestsuite K8sAgentPolicyTest
16:53:43 STEP: Ensuring the namespace kube-system exists
16:53:43 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
16:53:43 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
16:53:43 STEP: Installing Cilium
16:53:44 STEP: Waiting for Cilium to become ready
16:54:05 STEP: Validating if Kubernetes DNS is deployed
16:54:05 STEP: Checking if deployment is ready
16:54:05 STEP: Checking if kube-dns service is plumbed correctly
16:54:05 STEP: Checking if DNS can resolve
16:54:05 STEP: Checking if pods have identity
16:54:06 STEP: Kubernetes DNS is not ready: %!s(<nil>)
16:54:06 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
16:54:06 STEP: Waiting for Kubernetes DNS to become operational
16:54:06 STEP: Checking if deployment is ready
16:54:07 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
16:54:07 STEP: Checking if deployment is ready
16:54:08 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
16:54:08 STEP: Checking if deployment is ready
16:54:09 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
16:54:09 STEP: Checking if deployment is ready
16:54:10 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
16:54:10 STEP: Checking if deployment is ready
16:54:11 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
16:54:11 STEP: Checking if deployment is ready
16:54:12 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
16:54:12 STEP: Checking if deployment is ready
16:54:13 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
16:54:13 STEP: Checking if deployment is ready
16:54:14 STEP: Checking if kube-dns service is plumbed correctly
16:54:14 STEP: Checking if pods have identity
16:54:14 STEP: Checking if DNS can resolve
16:54:14 STEP: Validating Cilium Installation
16:54:14 STEP: Performing Cilium controllers preflight check
16:54:14 STEP: Performing Cilium status preflight check
16:54:14 STEP: Performing Cilium health check
16:54:14 STEP: Checking whether host EP regenerated
16:54:15 STEP: Performing Cilium service preflight check
16:54:15 STEP: Performing K8s service preflight check
16:54:15 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-g742p': Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init)
	 Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory
	 
	 command terminated with exit code 1
	 

16:54:15 STEP: Performing Cilium controllers preflight check
16:54:15 STEP: Performing Cilium status preflight check
16:54:15 STEP: Performing Cilium health check
16:54:15 STEP: Checking whether host EP regenerated
16:54:17 STEP: Performing Cilium service preflight check
16:54:17 STEP: Performing K8s service preflight check
16:54:17 STEP: Performing Cilium status preflight check
16:54:17 STEP: Performing Cilium health check
16:54:17 STEP: Performing Cilium controllers preflight check
16:54:17 STEP: Checking whether host EP regenerated
16:54:18 STEP: Performing Cilium service preflight check
16:54:18 STEP: Performing K8s service preflight check
16:54:22 STEP: Cilium is not ready yet: host EP is not ready: cilium-agent "cilium-g742p" host EP is not in ready state: "regenerating"
16:54:22 STEP: Performing Cilium controllers preflight check
16:54:22 STEP: Performing Cilium status preflight check
16:54:22 STEP: Performing Cilium health check
16:54:22 STEP: Checking whether host EP regenerated
16:54:23 STEP: Performing Cilium service preflight check
16:54:23 STEP: Performing K8s service preflight check
16:54:24 STEP: Performing Cilium controllers preflight check
16:54:24 STEP: Checking whether host EP regenerated
16:54:24 STEP: Performing Cilium status preflight check
16:54:24 STEP: Performing Cilium health check
16:54:25 STEP: Performing Cilium service preflight check
16:54:25 STEP: Performing K8s service preflight check
16:54:26 STEP: Performing Cilium controllers preflight check
16:54:26 STEP: Performing Cilium health check
16:54:26 STEP: Checking whether host EP regenerated
16:54:26 STEP: Performing Cilium status preflight check
16:54:27 STEP: Performing Cilium service preflight check
16:54:27 STEP: Performing K8s service preflight check
16:54:29 STEP: Waiting for cilium-operator to be ready
16:54:29 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
16:54:29 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
16:54:29 STEP: Running BeforeAll block for EntireTestsuite K8sAgentPolicyTest Basic Test
16:54:29 STEP: Deleting namespace 202211281654k8sagentpolicytestbasictesttlspolicy
16:54:29 STEP: Creating namespace 202211281654k8sagentpolicytestbasictesttlspolicy
16:54:29 STEP: WaitforPods(namespace="202211281654k8sagentpolicytestbasictesttlspolicy", filter="-l zgroup=testapp")
16:54:36 STEP: WaitforPods(namespace="202211281654k8sagentpolicytestbasictesttlspolicy", filter="-l zgroup=testapp") => <nil>
16:54:36 STEP: Running BeforeEach block for EntireTestsuite K8sAgentPolicyTest Basic Test
16:54:38 STEP: WaitforPods(namespace="202211281654k8sagentpolicytestbasictesttlspolicy", filter="-l zgroup=testapp")
16:54:38 STEP: WaitforPods(namespace="202211281654k8sagentpolicytestbasictesttlspolicy", filter="-l zgroup=testapp") => <nil>
16:54:38 STEP: Testing L7 Policy with TLS
16:54:41 STEP: Testing L7 Policy with TLS without HTTP rules
FAIL: Cannot connect from "app3-79897dfc85-6tkg2" to 'https://www.lyft.com:443/privacy'
Expected command: kubectl exec -n 202211281654k8sagentpolicytestbasictesttlspolicy app3-79897dfc85-6tkg2 -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 -4 -v --cacert /cacert.pem https://www.lyft.com:443/privacy -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" --retry 5 
To succeed, but it failed:
Exitcode: 60 
Err: exit status 60
Stdout:
 	 time-> DNS: '0.038771(18.154.242.24)', Connect: '0.040439',Transfer '0.000000', total '0.044553'
Stderr:
 	 *   Trying 18.154.242.24...
	 * TCP_NODELAY set
	 * Connected to www.lyft.com (18.154.242.24) port 443 (#0)
	 * ALPN, offering http/1.1
	 * successfully set certificate verify locations:
	 *   CAfile: /cacert.pem
	   CApath: none
	 * TLSv1.2 (OUT), TLS handshake, Client hello (1):
	 } [512 bytes data]
	 * TLSv1.2 (IN), TLS handshake, Server hello (2):
	 { [106 bytes data]
	 * TLSv1.2 (IN), TLS handshake, Certificate (11):
	 { [4957 bytes data]
	 * TLSv1.2 (OUT), TLS alert, unknown CA (560):
	 } [2 bytes data]
	 * SSL certificate problem: unable to get local issuer certificate
	 * Closing connection 0
	 command terminated with exit code 60
	 

=== Test Finished at 2022-11-28T16:54:41Z====
16:54:41 STEP: Running JustAfterEach block for EntireTestsuite K8sAgentPolicyTest
===================== TEST FAILED =====================
16:54:41 STEP: Running AfterFailed block for EntireTestsuite K8sAgentPolicyTest
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                          NAME                              READY   STATUS    RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202211281654k8sagentpolicytestbasictesttlspolicy   app1-84dcf689db-nh8sb             2/2     Running   0          14s     10.0.0.62       k8s1   <none>           <none>
	 202211281654k8sagentpolicytestbasictesttlspolicy   app1-84dcf689db-r4nq6             2/2     Running   0          14s     10.0.0.64       k8s1   <none>           <none>
	 202211281654k8sagentpolicytestbasictesttlspolicy   app2-6b6857744b-m7j8h             1/1     Running   0          14s     10.0.0.169      k8s1   <none>           <none>
	 202211281654k8sagentpolicytestbasictesttlspolicy   app3-79897dfc85-6tkg2             1/1     Running   0          14s     10.0.0.4        k8s1   <none>           <none>
	 cilium-monitoring                                  grafana-59957b9549-5xgb5          1/1     Running   0          6m18s   10.0.0.206      k8s1   <none>           <none>
	 cilium-monitoring                                  prometheus-7c8c9684bb-87clt       1/1     Running   0          6m18s   10.0.0.139      k8s1   <none>           <none>
	 kube-system                                        cilium-g742p                      1/1     Running   0          59s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                        cilium-operator-c48d8fcd6-96mzz   1/1     Running   0          59s     192.168.56.13   k8s3   <none>           <none>
	 kube-system                                        cilium-operator-c48d8fcd6-b77tx   1/1     Running   0          59s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                        cilium-zhbj5                      1/1     Running   0          59s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                        coredns-567b6dd84-fx49k           1/1     Running   0          37s     10.0.1.3        k8s2   <none>           <none>
	 kube-system                                        etcd-k8s1                         1/1     Running   0          16m     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                        kube-apiserver-k8s1               1/1     Running   0          16m     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                        kube-controller-manager-k8s1      1/1     Running   0          16m     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                        kube-scheduler-k8s1               1/1     Running   0          16m     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                        log-gatherer-frh4x                1/1     Running   0          6m30s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                        log-gatherer-jphg7                1/1     Running   0          6m30s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                        log-gatherer-xp4jc                1/1     Running   0          6m30s   192.168.56.13   k8s3   <none>           <none>
	 kube-system                                        registry-adder-d7xkq              1/1     Running   0          7m5s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                        registry-adder-dgljm              1/1     Running   0          7m5s    192.168.56.13   k8s3   <none>           <none>
	 kube-system                                        registry-adder-hl9xr              1/1     Running   0          7m5s    192.168.56.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-g742p cilium-zhbj5]
cmd: kubectl exec -n kube-system cilium-g742p -c cilium-agent -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend            Service Type   Backend                            
	 2    10.96.0.1:443       ClusterIP      1 => 192.168.56.11:6443 (active)   
	 3    10.96.0.10:53       ClusterIP      1 => 10.0.1.3:53 (active)          
	 4    10.96.0.10:9153     ClusterIP      1 => 10.0.1.3:9153 (active)        
	 5    10.104.0.225:3000   ClusterIP      1 => 10.0.0.206:3000 (active)      
	 6    10.99.79.38:9090    ClusterIP      1 => 10.0.0.139:9090 (active)      
	 7    10.109.199.10:443   ClusterIP      1 => 192.168.56.12:4244 (active)   
	                                         2 => 192.168.56.11:4244 (active)   
	 8    10.98.55.232:80     ClusterIP      1 => 10.0.0.64:80 (active)         
	                                         2 => 10.0.0.62:80 (active)         
	 9    10.98.55.232:69     ClusterIP      1 => 10.0.0.64:69 (active)         
	                                         2 => 10.0.0.62:69 (active)         
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-g742p -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                       IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                            
	 357        Disabled           Disabled          17930      k8s:id=app1                                                                                                       fd02::57   10.0.0.62    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202211281654k8sagentpolicytestbasictesttlspolicy                                   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=app1-account                                                                                              
	                                                            k8s:io.kubernetes.pod.namespace=202211281654k8sagentpolicytestbasictesttlspolicy                                                                  
	                                                            k8s:zgroup=testapp                                                                                                                                
	 1220       Disabled           Disabled          4          reserved:health                                                                                                   fd02::b    10.0.0.49    ready   
	 1708       Disabled           Disabled          4761       k8s:app=prometheus                                                                                                fd02::5c   10.0.0.139   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                  
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                                            
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                 
	 2111       Disabled           Enabled           3298       k8s:id=app3                                                                                                       fd02::3a   10.0.0.4     ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202211281654k8sagentpolicytestbasictesttlspolicy                                   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202211281654k8sagentpolicytestbasictesttlspolicy                                                                  
	                                                            k8s:zgroup=testapp                                                                                                                                
	 2322       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                         
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                       
	                                                            reserved:host                                                                                                                                     
	 2911       Disabled           Disabled          36301      k8s:app=grafana                                                                                                   fd02::e0   10.0.0.206   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                  
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                 
	 3254       Disabled           Enabled           1182       k8s:appSecond=true                                                                                                fd02::ba   10.0.0.169   ready   
	                                                            k8s:id=app2                                                                                                                                       
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202211281654k8sagentpolicytestbasictesttlspolicy                                   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=app2-account                                                                                              
	                                                            k8s:io.kubernetes.pod.namespace=202211281654k8sagentpolicytestbasictesttlspolicy                                                                  
	                                                            k8s:zgroup=testapp                                                                                                                                
	 3676       Disabled           Disabled          17930      k8s:id=app1                                                                                                       fd02::e2   10.0.0.64    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202211281654k8sagentpolicytestbasictesttlspolicy                                   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=app1-account                                                                                              
	                                                            k8s:io.kubernetes.pod.namespace=202211281654k8sagentpolicytestbasictesttlspolicy                                                                  
	                                                            k8s:zgroup=testapp                                                                                                                                
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-zhbj5 -c cilium-agent -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend            Service Type   Backend                            
	 1    10.96.0.1:443       ClusterIP      1 => 192.168.56.11:6443 (active)   
	 2    10.96.0.10:53       ClusterIP      1 => 10.0.1.3:53 (active)          
	 3    10.96.0.10:9153     ClusterIP      1 => 10.0.1.3:9153 (active)        
	 4    10.104.0.225:3000   ClusterIP      1 => 10.0.0.206:3000 (active)      
	 5    10.99.79.38:9090    ClusterIP      1 => 10.0.0.139:9090 (active)      
	 6    10.109.199.10:443   ClusterIP      1 => 192.168.56.12:4244 (active)   
	                                         2 => 192.168.56.11:4244 (active)   
	 7    10.98.55.232:80     ClusterIP      1 => 10.0.0.64:80 (active)         
	                                         2 => 10.0.0.62:80 (active)         
	 8    10.98.55.232:69     ClusterIP      1 => 10.0.0.64:69 (active)         
	                                         2 => 10.0.0.62:69 (active)         
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-zhbj5 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                  IPv6        IPv4        STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                       
	 357        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                           ready   
	                                                            reserved:host                                                                                                
	 3072       Disabled           Disabled          4          reserved:health                                                              fd02::17c   10.0.1.42   ready   
	 3125       Disabled           Disabled          34588      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system   fd02::10f   10.0.1.3    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                     
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                              
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                  
	                                                            k8s:k8s-app=kube-dns                                                                                         
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
16:54:54 STEP: Running AfterEach for block EntireTestsuite K8sAgentPolicyTest Basic Test
16:54:54 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|7ffb142b_K8sAgentPolicyTest_Basic_Test_TLS_policy.zip]]


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-1.25-kernel-net-next//814/artifact/7ffb142b_K8sAgentPolicyTest_Basic_Test_TLS_policy.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.25-kernel-net-next//814/artifact/e06f2e18_K8sUpdates_Tests_upgrade_and_downgrade_from_a_Cilium_stable_image_to_master.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.25-kernel-net-next//814/artifact/test_results_Cilium-PR-K8s-1.25-kernel-net-next_814_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.25-kernel-net-next/814/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

maintainer-s-little-helper bot added the ci/flake label on Nov 28, 2022
aanm (Member) commented Nov 28, 2022

Fixed by #22403

aanm closed this as completed on Nov 28, 2022
chancez (Contributor) commented Dec 7, 2022

What if we added --retry-all-errors to the curl command?
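
For reference, the command from the failure output with that flag appended would look roughly like this; --retry-all-errors (curl 7.71.0 and later) makes --retry also cover errors curl otherwise treats as fatal, such as exit code 60:

kubectl exec -n 202211281654k8sagentpolicytestbasictesttlspolicy app3-79897dfc85-6tkg2 -- \
  curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 -4 -v \
       --cacert /cacert.pem https://www.lyft.com:443/privacy \
       --retry 5 --retry-all-errors

Note that this would only paper over the failure if the unknown-CA condition is transient, e.g. if certificate injection races with the policy becoming effective.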

jrajahalme (Member) commented:

Added Envoy trace level logging to this test (via #22646). Please attach sysdumps from fails like this here (on PRs that have been created or rebased after now) so that I can take a look.
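
A sysdump is typically collected with the cilium CLI against the affected cluster; by default it writes a cilium-sysdump-<timestamp>.zip into the current directory, which can then be attached here. Minimal sketch, assuming the cilium CLI is installed and kubeconfig points at the test cluster:

# collect state and logs from all Cilium components in the cluster
cilium sysdump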

jrajahalme (Member) commented:

Thanks for the data; on a quick look there should be logs to go by.

Quarantine PR for this flake: #22684

pchaigno added the pinned label on Jan 25, 2023
pchaigno (Member) commented:

The CI dashboard seems to suggest this may not be flaky anymore: https://datastudio.google.com/s/pDDgQ5MC4dA.
