CI: K8sDatapathConfig Encapsulation Check iptables masquerading with random-fully #24253

Closed
maintainer-s-little-helper bot opened this issue Mar 8, 2023 · 3 comments
Labels
ci/flake: This is a known failure that occurs in the tree. Please investigate me!
stale: The stale bot thinks this issue is old. Add "pinned" label to prevent this from becoming stale.

Comments

@maintainer-s-little-helper

Test Name

K8sDatapathConfig Encapsulation Check iptables masquerading with random-fully
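This test installs Cilium with iptables-based masquerading and the random-fully option enabled, makes curl requests from the test client pods to an external destination, and then scans the cilium-agent logs for errors. As a rough manual check (not part of the test itself, pod name taken from this run, and assuming the agent container can see the host's nat table), the masquerade rules can be inspected for the --random-fully flag:

kubectl -n kube-system exec cilium-6s6xv -c cilium-agent -- cilium status | grep Masquerading
kubectl -n kube-system exec cilium-6s6xv -c cilium-agent -- iptables-save -t nat | grep -- '--random-fully'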

Failure Output

FAIL: Found 1 k8s-app=cilium logs matching list of errors that must be investigated:

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.24-kernel-5.4/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:415
Found 1 k8s-app=cilium logs matching list of errors that must be investigated:
2023-03-06T15:59:31.085396110Z level=error msg="endpoint regeneration failed" containerID=95bb62a7af datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=1265 error="Error while configuring proxy redirects: context cancelled before waiting for proxy updates: context canceled" identity=30980 ipv4=10.0.0.225 ipv6=10.0.0.225 k8sPodName=kube-system/coredns-6b775575b5-kpbgm subsys=endpoint
/home/jenkins/workspace/Cilium-PR-K8s-1.24-kernel-5.4/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:413
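
The assertion that fails here is the generic error-log scan in the test framework (scopes.go), not the masquerading check itself: the single offending line is an endpoint regeneration failure for the coredns pod that was restarted during test setup. One way to pull the same line straight from the agent logs when triaging (a manual sketch, assuming the pods from this run are still present) is:

kubectl -n kube-system logs -l k8s-app=cilium -c cilium-agent --timestamps | grep 'endpoint regeneration failed'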

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
⚠️  Found "2023-03-06T15:59:31.085396110Z level=error msg=\"endpoint regeneration failed\" containerID=95bb62a7af datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=1265 error=\"Error while configuring proxy redirects: context cancelled before waiting for proxy updates: context canceled\" identity=30980 ipv4=10.0.0.225 ipv6=10.0.0.225 k8sPodName=kube-system/coredns-6b775575b5-kpbgm subsys=endpoint" in logs 1 times
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 1
Number of "level=warning" in logs: 5
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 4 errors/warnings:
Disabling socket-LB tracing as it requires kernel 5.7 or newer
Session affinity for host reachable services needs kernel 5.7.0 or newer to work properly when accessed from inside cluster: the same service endpoint will be selected from all network namespaces on the host.
endpoint regeneration failed
Regeneration of endpoint failed
Cilium pods: [cilium-6s6xv cilium-kfcjg]
Netpols loaded: 
CiliumNetworkPolicies loaded: 202303061559k8sdatapathconfigencapsulationcheckiptablesmasquera::l3-policy-demo 
Endpoint Policy Enforcement:
Pod                          Ingress   Egress
test-k8s2-85f4dbf6cb-sdjcj   false     false
testclient-6jwdj             false     false
testclient-8jwq7             false     false
testds-9ntkk                 false     false
testds-vmvz6                 false     false
coredns-6b775575b5-tq6pj     false     false
Cilium agent 'cilium-6s6xv': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 28 Failed 0
Cilium agent 'cilium-kfcjg': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 36 Failed 0


Standard Error

15:59:00 STEP: Installing Cilium
15:59:02 STEP: Waiting for Cilium to become ready
15:59:21 STEP: Validating if Kubernetes DNS is deployed
15:59:21 STEP: Checking if deployment is ready
15:59:21 STEP: Checking if kube-dns service is plumbed correctly
15:59:21 STEP: Checking if DNS can resolve
15:59:21 STEP: Checking if pods have identity
15:59:25 STEP: Kubernetes DNS is not ready: %!s(<nil>)
15:59:25 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
15:59:25 STEP: Waiting for Kubernetes DNS to become operational
15:59:25 STEP: Checking if deployment is ready
15:59:25 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
15:59:26 STEP: Checking if deployment is ready
15:59:26 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
15:59:27 STEP: Checking if deployment is ready
15:59:27 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
15:59:28 STEP: Checking if deployment is ready
15:59:28 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
15:59:29 STEP: Checking if deployment is ready
15:59:29 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
15:59:30 STEP: Checking if deployment is ready
15:59:30 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
15:59:31 STEP: Checking if deployment is ready
15:59:31 STEP: Checking if kube-dns service is plumbed correctly
15:59:31 STEP: Checking if pods have identity
15:59:31 STEP: Checking if DNS can resolve
15:59:35 STEP: Validating Cilium Installation
15:59:35 STEP: Performing Cilium controllers preflight check
15:59:35 STEP: Performing Cilium health check
15:59:35 STEP: Performing Cilium status preflight check
15:59:35 STEP: Checking whether host EP regenerated
15:59:42 STEP: Performing Cilium service preflight check
15:59:42 STEP: Performing K8s service preflight check
15:59:48 STEP: Waiting for cilium-operator to be ready
15:59:48 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
15:59:48 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
15:59:48 STEP: Making sure all endpoints are in ready state
15:59:51 STEP: Creating namespace 202303061559k8sdatapathconfigencapsulationcheckiptablesmasquera
15:59:51 STEP: Deploying demo_ds.yaml in namespace 202303061559k8sdatapathconfigencapsulationcheckiptablesmasquera
15:59:52 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.24-kernel-5.4/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
15:59:59 STEP: Waiting for 4m0s for 5 pods of deployment demo_ds.yaml to become ready
15:59:59 STEP: WaitforNPods(namespace="202303061559k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="")
15:59:59 STEP: WaitforNPods(namespace="202303061559k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="") => <nil>
15:59:59 STEP: Checking pod connectivity between nodes
15:59:59 STEP: WaitforPods(namespace="202303061559k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDSClient")
15:59:59 STEP: WaitforPods(namespace="202303061559k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDSClient") => <nil>
15:59:59 STEP: WaitforPods(namespace="202303061559k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDS")
15:59:59 STEP: WaitforPods(namespace="202303061559k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDS") => <nil>
16:00:04 STEP: Test iptables masquerading
16:00:04 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.24-kernel-5.4/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
16:00:07 STEP: Waiting for 4m0s for 5 pods of deployment demo_ds.yaml to become ready
16:00:07 STEP: WaitforNPods(namespace="202303061559k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="")
16:00:07 STEP: WaitforNPods(namespace="202303061559k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="") => <nil>
16:00:07 STEP: Making ten curl requests from "testclient-6jwdj" to "http://google.com"
16:00:08 STEP: Making ten curl requests from "testclient-8jwq7" to "http://google.com"
16:00:09 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.24-kernel-5.4/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
16:00:13 STEP: Waiting for 4m0s for 5 pods of deployment demo_ds.yaml to become ready
16:00:13 STEP: WaitforNPods(namespace="202303061559k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="")
16:00:13 STEP: WaitforNPods(namespace="202303061559k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="") => <nil>
16:00:13 STEP: Making ten curl requests from "testclient-6jwdj" to "http://google.com"
16:00:14 STEP: Making ten curl requests from "testclient-8jwq7" to "http://google.com"
=== Test Finished at 2023-03-06T16:00:16Z====
16:00:16 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
FAIL: Found 1 k8s-app=cilium logs matching list of errors that must be investigated:
2023-03-06T15:59:31.085396110Z level=error msg="endpoint regeneration failed" containerID=95bb62a7af datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=1265 error="Error while configuring proxy redirects: context cancelled before waiting for proxy updates: context canceled" identity=30980 ipv4=10.0.0.225 ipv6=10.0.0.225 k8sPodName=kube-system/coredns-6b775575b5-kpbgm subsys=endpoint
===================== TEST FAILED =====================
16:00:16 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS    RESTARTS   AGE   IP              NODE   NOMINATED NODE   READINESS GATES
	 202303061559k8sdatapathconfigencapsulationcheckiptablesmasquera   test-k8s2-85f4dbf6cb-sdjcj         2/2     Running   0          30s   10.0.1.201      k8s2   <none>           <none>
	 202303061559k8sdatapathconfigencapsulationcheckiptablesmasquera   testclient-6jwdj                   1/1     Running   0          30s   10.0.1.211      k8s2   <none>           <none>
	 202303061559k8sdatapathconfigencapsulationcheckiptablesmasquera   testclient-8jwq7                   1/1     Running   0          30s   10.0.0.226      k8s1   <none>           <none>
	 202303061559k8sdatapathconfigencapsulationcheckiptablesmasquera   testds-9ntkk                       2/2     Running   0          30s   10.0.1.25       k8s2   <none>           <none>
	 202303061559k8sdatapathconfigencapsulationcheckiptablesmasquera   testds-vmvz6                       2/2     Running   0          30s   10.0.0.82       k8s1   <none>           <none>
	 cilium-monitoring                                                 grafana-84476dcf4b-vfdfl           0/1     Running   0          20m   10.0.0.1        k8s1   <none>           <none>
	 cilium-monitoring                                                 prometheus-7dbb447479-87lcj        1/1     Running   0          20m   10.0.0.39       k8s1   <none>           <none>
	 kube-system                                                       cilium-6s6xv                       1/1     Running   0          79s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-kfcjg                       1/1     Running   0          79s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-557fc99d88-4tllf   1/1     Running   0          79s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-557fc99d88-m692h   1/1     Running   0          79s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       coredns-6b775575b5-tq6pj           1/1     Running   0          56s   10.0.1.181      k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running   0          26m   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running   0          26m   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running   0          26m   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-fpxsd                   1/1     Running   0          21m   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-proxy-p5sqd                   1/1     Running   0          25m   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running   0          26m   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-qsxll                 1/1     Running   0          20m   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-z2j9c                 1/1     Running   0          20m   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-54lfz               1/1     Running   0          21m   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-nnmn5               1/1     Running   0          21m   192.168.56.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-6s6xv cilium-kfcjg]
cmd: kubectl exec -n kube-system cilium-6s6xv -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.24 (v1.24.4) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Strict   [enp0s16 192.168.59.15 fd17:625c:f037:2:a00:27ff:fe13:8e64, enp0s3 10.0.2.15 fd04::11, enp0s8 192.168.56.11 fd04::11]
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.90 (v1.13.90-11900ae7)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 4/254 allocated from 10.0.0.0/24, IPv6: 4/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       28/28 healthy
	 Proxy Status:            OK, ip 10.0.0.160, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 979/65535 (1.49%), Flows/s: 11.25   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2023-03-06T15:59:42Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-6s6xv -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 84         Disabled           Disabled          4          reserved:health                                                                                                                  fd02::47   10.0.0.129   ready   
	 1082       Enabled            Disabled          5692       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202303061559k8sdatapathconfigencapsulationcheckiptablesmasquera   fd02::b0   10.0.0.82    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202303061559k8sdatapathconfigencapsulationcheckiptablesmasquera                                                                  
	                                                            k8s:zgroup=testDS                                                                                                                                                
	 1535       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                               ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                        
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                      
	                                                            reserved:host                                                                                                                                                    
	 3178       Disabled           Disabled          1813       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202303061559k8sdatapathconfigencapsulationcheckiptablesmasquera   fd02::2    10.0.0.226   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202303061559k8sdatapathconfigencapsulationcheckiptablesmasquera                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                                                                          
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-kfcjg -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.24 (v1.24.4) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Strict   [enp0s16 192.168.59.15 fd17:625c:f037:2:a00:27ff:fe48:20a9, enp0s3 10.0.2.15 fd04::12, enp0s8 192.168.56.12 fd04::12]
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.90 (v1.13.90-11900ae7)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 6/254 allocated from 10.0.1.0/24, IPv6: 6/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       36/36 healthy
	 Proxy Status:            OK, ip 10.0.1.117, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 1138/65535 (1.74%), Flows/s: 13.58   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2023-03-06T15:59:48Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-kfcjg -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                            
	 303        Enabled            Disabled          5692       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202303061559k8sdatapathconfigencapsulationcheckiptablesmasquera   fd02::11b   10.0.1.25    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202303061559k8sdatapathconfigencapsulationcheckiptablesmasquera                                                                   
	                                                            k8s:zgroup=testDS                                                                                                                                                 
	 1207       Disabled           Disabled          30980      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::1fd   10.0.1.181   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                       
	                                                            k8s:k8s-app=kube-dns                                                                                                                                              
	 1512       Disabled           Disabled          16275      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202303061559k8sdatapathconfigencapsulationcheckiptablesmasquera   fd02::17f   10.0.1.201   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202303061559k8sdatapathconfigencapsulationcheckiptablesmasquera                                                                   
	                                                            k8s:zgroup=test-k8s2                                                                                                                                              
	 1634       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                                                ready   
	                                                            reserved:host                                                                                                                                                     
	 1837       Disabled           Disabled          4          reserved:health                                                                                                                  fd02::1a3   10.0.1.11    ready   
	 2398       Disabled           Disabled          1813       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202303061559k8sdatapathconfigencapsulationcheckiptablesmasquera   fd02::136   10.0.1.211   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202303061559k8sdatapathconfigencapsulationcheckiptablesmasquera                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                                                                           
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
16:00:36 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
16:00:36 STEP: Deleting deployment demo_ds.yaml
16:00:37 STEP: Deleting namespace 202303061559k8sdatapathconfigencapsulationcheckiptablesmasquera
16:00:50 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|b0ede8f5_K8sDatapathConfig_Encapsulation_Check_iptables_masquerading_with_random-fully.zip]]


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-1.24-kernel-5.4//1112/artifact/b0ede8f5_K8sDatapathConfig_Encapsulation_Check_iptables_masquerading_with_random-fully.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.24-kernel-5.4//1112/artifact/test_results_Cilium-PR-K8s-1.24-kernel-5.4_1112_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.24-kernel-5.4/1112/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

maintainer-s-little-helper bot added the ci/flake label on Mar 8, 2023
@joestringer (Member) commented:

"Error while configuring proxy redirects: context cancelled before waiting for proxy updates: context canceled"

github-actions bot commented May 8, 2023

This issue has been automatically marked as stale because it has not
had recent activity. It will be closed if no further activity occurs.

github-actions bot added the stale label on May 8, 2023
@github-actions bot commented:

This issue has not seen any activity since it was marked stale.
Closing.

github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) on May 22, 2023.