
CI: Conformance Ginkgo - K8sDatapathConfig Check BPF masquerading with ip-masq-agent [It] DirectRouting #26191

Closed
giorio94 opened this issue Jun 13, 2023 · 2 comments
Labels: area/CI (Continuous Integration testing issue or flake), ci/flake (known failure in the tree), stale

Comments

@giorio94 (Member) commented:
CI failure

Hit on #26154: https://github.com/cilium/cilium/actions/runs/5256751015/jobs/9498618245
Matrix entry: (1.27, f09-datapath-misc-2)
Test results: test_results-E2E Test (1.27, f09-datapath-misc-2).tar.gz

K8sDatapathConfig Check BPF masquerading with ip-masq-agent 
  DirectRouting
  /home/runner/work/cilium/cilium/test/ginkgo-ext/scopes.go:515
14:35:20 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathConfig Check BPF masquerading with ip-masq-agent
14:35:20 STEP: WaitforPods(namespace="default", filter="-l name=echoserver-hostnetns")
14:35:24 STEP: WaitforPods(namespace="default", filter="-l name=echoserver-hostnetns") => <nil>
14:35:24 STEP: Installing Cilium
14:35:29 STEP: Waiting for Cilium to become ready
14:36:02 STEP: Validating if Kubernetes DNS is deployed
14:36:02 STEP: Checking if deployment is ready
14:36:03 STEP: Kubernetes DNS is not ready: only 0 of 2 replicas are available
14:36:03 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
14:36:05 STEP: Waiting for Kubernetes DNS to become operational
14:36:05 STEP: Checking if deployment is ready
14:36:06 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
14:36:06 STEP: Checking if deployment is ready
14:36:08 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
14:36:08 STEP: Checking if deployment is ready
14:36:09 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
14:36:09 STEP: Checking if deployment is ready
14:36:10 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
14:36:10 STEP: Checking if deployment is ready
14:36:11 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
14:36:11 STEP: Checking if deployment is ready
14:36:11 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
14:36:11 STEP: Checking if deployment is ready
14:36:12 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
14:36:12 STEP: Checking if deployment is ready
14:36:14 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
14:36:14 STEP: Checking if deployment is ready
14:36:14 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
14:36:14 STEP: Checking if deployment is ready
14:36:15 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
14:36:15 STEP: Checking if deployment is ready
14:36:17 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
14:36:17 STEP: Checking if deployment is ready
14:36:18 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
14:36:18 STEP: Checking if deployment is ready
14:36:19 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
14:36:19 STEP: Checking if deployment is ready
14:36:19 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
14:36:19 STEP: Checking if deployment is ready
14:36:20 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
14:36:20 STEP: Checking if deployment is ready
14:36:21 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
14:36:21 STEP: Checking if deployment is ready
14:36:22 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
14:36:22 STEP: Checking if deployment is ready
14:36:23 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
14:36:23 STEP: Checking if deployment is ready
14:36:24 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
14:36:24 STEP: Checking if deployment is ready
14:36:25 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
14:36:25 STEP: Checking if deployment is ready
14:36:26 STEP: Checking if kube-dns service is plumbed correctly
14:36:26 STEP: Checking if pods have identity
14:36:26 STEP: Checking if DNS can resolve
14:36:34 STEP: Validating Cilium Installation
14:36:34 STEP: Performing Cilium health check
14:36:34 STEP: Performing Cilium status preflight check
14:36:34 STEP: Checking whether host EP regenerated
14:36:34 STEP: Performing Cilium controllers preflight check
14:36:47 STEP: Performing Cilium service preflight check
14:36:47 STEP: Performing K8s service preflight check
14:36:50 STEP: Waiting for cilium-operator to be ready
14:36:50 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
14:36:50 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
14:36:50 STEP: Making sure all endpoints are in ready state
14:36:53 STEP: Creating namespace 202306131436k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag
14:36:54 STEP: Deploying demo_ds.yaml in namespace 202306131436k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag
14:36:55 STEP: Applying policy /host/test/k8s/manifests/l3-policy-demo.yaml
14:37:07 STEP: Waiting for 4m0s for 5 pods of deployment demo_ds.yaml to become ready
14:37:07 STEP: WaitforNPods(namespace="202306131436k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag", filter="")
14:37:35 STEP: WaitforNPods(namespace="202306131436k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag", filter="") => <nil>
14:37:35 STEP: Making ten curl requests from "testclient-7wwlz" to "http://172.18.0.2:80"
14:37:37 STEP: Making ten curl requests from "testclient-cnszp" to "http://172.18.0.2:80"
FAIL: Pod "testclient-cnszp" can not connect to "http://172.18.0.2:80"
Expected command: kubectl exec -n 202306131436k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag testclient-cnszp -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://172.18.0.2:80 -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" --retry 10 | grep client_address=
To succeed, but it failed:
Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 command terminated with exit code 52
	 

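For reference, curl exit code 52 means "empty reply from server": the TCP connection to the echoserver was established, but no HTTP response came back. A minimal sketch for re-running the failing check by hand, with the namespace and pod names taken from the log above:

```sh
# Re-run the failing connectivity check manually (names from the log above).
NS=202306131436k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag

# Verbose curl shows whether the connection is established and then stalls
# (exit 52) or fails earlier (e.g. exit 28 on a connect timeout).
kubectl exec -n "$NS" testclient-cnszp -- \
  curl -v --connect-timeout 5 --max-time 20 http://172.18.0.2:80

# The test additionally greps for client_address= in the echo response to
# check which source address the hostns echoserver observed.
kubectl exec -n "$NS" testclient-cnszp -- \
  curl -s http://172.18.0.2:80 | grep client_address=
```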
=== Test Finished at 2023-06-13T14:37:47Z====
14:37:47 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
14:37:48 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                                         READY   STATUS    RESTARTS   AGE     IP           NODE                 NOMINATED NODE   READINESS GATES
	 202306131436k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag   test-k8s2-85df6cf7dc-cmhj4                   2/2     Running   0          75s     10.0.1.185   kind-worker          <none>           <none>
	 202306131436k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag   testclient-7wwlz                             1/1     Running   0          75s     10.0.1.112   kind-worker          <none>           <none>
	 202306131436k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag   testclient-cnszp                             1/1     Running   0          75s     10.0.0.160   kind-control-plane   <none>           <none>
	 202306131436k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag   testds-2sgdj                                 2/2     Running   0          75s     10.0.1.179   kind-worker          <none>           <none>
	 202306131436k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag   testds-cjjkz                                 2/2     Running   0          75s     10.0.0.64    kind-control-plane   <none>           <none>
	 cilium-monitoring                                                 grafana-758c69b6df-82lwm                     1/1     Running   0          2m52s   10.0.1.24    kind-worker          <none>           <none>
	 cilium-monitoring                                                 prometheus-5bc5cbbf9d-6n58g                  1/1     Running   0          2m52s   10.0.1.48    kind-worker          <none>           <none>
	 default                                                           echoserver-nkcdt                             1/1     Running   0          2m50s   172.18.0.2   kind-worker2         <none>           <none>
	 kube-system                                                       cilium-4ndrx                                 1/1     Running   0          2m41s   172.18.0.3   kind-control-plane   <none>           <none>
	 kube-system                                                       cilium-node-init-2qvnj                       1/1     Running   0          2m41s   172.18.0.2   kind-worker2         <none>           <none>
	 kube-system                                                       cilium-node-init-jz8pv                       1/1     Running   0          2m41s   172.18.0.4   kind-worker          <none>           <none>
	 kube-system                                                       cilium-node-init-qfqjx                       1/1     Running   0          2m41s   172.18.0.3   kind-control-plane   <none>           <none>
	 kube-system                                                       cilium-operator-d869477bb-nq5vm              1/1     Running   0          2m41s   172.18.0.4   kind-worker          <none>           <none>
	 kube-system                                                       cilium-operator-d869477bb-qgwnj              1/1     Running   0          2m41s   172.18.0.2   kind-worker2         <none>           <none>
	 kube-system                                                       cilium-zx9gr                                 1/1     Running   0          2m41s   172.18.0.4   kind-worker          <none>           <none>
	 kube-system                                                       coredns-5d78c9869d-bm8t6                     1/1     Running   0          2m4s    10.0.0.198   kind-control-plane   <none>           <none>
	 kube-system                                                       coredns-5d78c9869d-mp299                     1/1     Running   0          2m4s    10.0.0.30    kind-control-plane   <none>           <none>
	 kube-system                                                       etcd-kind-control-plane                      1/1     Running   0          3m42s   172.18.0.3   kind-control-plane   <none>           <none>
	 kube-system                                                       kube-apiserver-kind-control-plane            1/1     Running   0          3m41s   172.18.0.3   kind-control-plane   <none>           <none>
	 kube-system                                                       kube-controller-manager-kind-control-plane   1/1     Running   0          3m41s   172.18.0.3   kind-control-plane   <none>           <none>
	 kube-system                                                       kube-scheduler-kind-control-plane            1/1     Running   0          3m41s   172.18.0.3   kind-control-plane   <none>           <none>
	 kube-system                                                       log-gatherer-2249c                           1/1     Running   0          3m14s   172.18.0.4   kind-worker          <none>           <none>
	 kube-system                                                       log-gatherer-8s88g                           1/1     Running   0          3m14s   172.18.0.3   kind-control-plane   <none>           <none>
	 kube-system                                                       log-gatherer-r7vbc                           1/1     Running   0          3m14s   172.18.0.2   kind-worker2         <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-4ndrx cilium-zx9gr]
cmd: kubectl exec -n kube-system cilium-4ndrx -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.27 (v1.27.1) [linux/amd64]
	 Kubernetes APIs:         ["EndpointSliceOrEndpoint", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumCIDRGroup", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Strict   [eth0 172.18.0.3 fc00:c111::3 (Direct Routing)]
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 Cilium:                  Ok   1.14.0-dev (v1.14.0-dev-93f2d89e)
	 NodeMonitor:             Listening for events on 4 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 6/254 allocated from 10.0.0.0/24, IPv6: 6/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            BPF
	 Masquerading:            BPF (ip-masq-agent)   [eth0]   10.0.0.0/8 [IPv4: Enabled, IPv6: Disabled]
	 Controller Status:       36/36 healthy
	 Proxy Status:            OK, ip 10.0.0.89, 0 redirects active on ports 10000-20000, Envoy: embedded
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok              Current/Max Flows: 1705/65535 (2.60%), Flows/s: 10.67   Metrics: Disabled
	 Encryption:              Disabled        
	 Cluster health:          2/2 reachable   (2023-06-13T14:37:30Z)
	 
Stderr:
 	 

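The status output above confirms BPF masquerading via ip-masq-agent on eth0, with 10.0.0.0/8 as the only non-masquerade CIDR. If this flake reproduces, dumping the BPF ip-masq map would confirm what the datapath actually programmed (a sketch; `cilium bpf ipmasq list` is the agent-side command for this map, pod names from the log above):

```sh
# Dump the ip-masq-agent BPF map on both agents: destinations inside a
# listed CIDR are left alone, everything else is SNATed to the node IP.
for pod in cilium-4ndrx cilium-zx9gr; do
  kubectl exec -n kube-system "$pod" -c cilium-agent -- cilium bpf ipmasq list
done
```

Since 172.18.0.2 falls outside 10.0.0.0/8, requests from the test clients to the echoserver should leave masqueraded to their node's IP.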
cmd: kubectl exec -n kube-system cilium-4ndrx -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 261        Disabled           Disabled          17285      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202306131436k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag   fd02::42   10.0.0.64    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202306131436k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag                                                                  
	                                                            k8s:zgroup=testDS                                                                                                                                                
	 476        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::61   10.0.0.152   ready   
	 1076       Disabled           Disabled          22233      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::b    10.0.0.198   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                      
	                                                            k8s:k8s-app=kube-dns                                                                                                                                             
	 1286       Disabled           Disabled          41081      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202306131436k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag   fd02::8d   10.0.0.160   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202306131436k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                                                                          
	 1451       Disabled           Disabled          22233      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::47   10.0.0.30    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                      
	                                                            k8s:k8s-app=kube-dns                                                                                                                                             
	 3941       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                               ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                        
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                      
	                                                            reserved:host                                                                                                                                                    
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-zx9gr -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.27 (v1.27.1) [linux/amd64]
	 Kubernetes APIs:         ["EndpointSliceOrEndpoint", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumCIDRGroup", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Strict   [eth0 172.18.0.4 fc00:c111::4 (Direct Routing)]
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 Cilium:                  Ok   1.14.0-dev (v1.14.0-dev-93f2d89e)
	 NodeMonitor:             Listening for events on 4 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 7/254 allocated from 10.0.1.0/24, IPv6: 7/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            BPF
	 Masquerading:            BPF (ip-masq-agent)   [eth0]   10.0.0.0/8 [IPv4: Enabled, IPv6: Disabled]
	 Controller Status:       40/40 healthy
	 Proxy Status:            OK, ip 10.0.1.161, 0 redirects active on ports 10000-20000, Envoy: embedded
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok              Current/Max Flows: 1087/65535 (1.66%), Flows/s: 5.78   Metrics: Disabled
	 Encryption:              Disabled        
	 Cluster health:          2/2 reachable   (2023-06-13T14:37:31Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-zx9gr -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                            
	 84         Disabled           Disabled          13453      k8s:app=prometheus                                                                                                               fd02::11c   10.0.1.48    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                  
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                                                            
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                 
	 339        Disabled           Disabled          36627      k8s:app=grafana                                                                                                                  fd02::1cd   10.0.1.24    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                  
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                 
	 586        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                                                ready   
	                                                            reserved:host                                                                                                                                                     
	 626        Disabled           Disabled          41207      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202306131436k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag   fd02::106   10.0.1.185   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202306131436k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag                                                                   
	                                                            k8s:zgroup=test-k8s2                                                                                                                                              
	 678        Disabled           Disabled          41081      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202306131436k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag   fd02::119   10.0.1.112   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202306131436k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                                                                           
	 987        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::1ab   10.0.1.55    ready   
	 2143       Disabled           Disabled          17285      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202306131436k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag   fd02::1a4   10.0.1.179   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202306131436k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag                                                                   
	                                                            k8s:zgroup=testDS                                                                                                                                                 
	 
Stderr:
 	 

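Putting the endpoint lists and the pod table together: the failing client testclient-cnszp (10.0.0.160) runs on kind-control-plane (172.18.0.3), while the echoserver sits in the host netns of kind-worker2 (172.18.0.2), so a successful request should arrive with client_address=172.18.0.3. A sketch for tracing drops on the client's agent while repeating the request from another terminal, assuming the flake reproduces:

```sh
# Watch datapath drop events on the agent serving the failing client; an
# unmasqueraded or policy-dropped request should show up here.
kubectl exec -n kube-system cilium-4ndrx -c cilium-agent -- \
  cilium monitor --type drop
```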
===================== Exiting AfterFailed =====================
14:38:22 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
14:38:22 STEP: Deleting deployment demo_ds.yaml
14:38:23 STEP: Deleting namespace 202306131436k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag
14:38:39 STEP: Running AfterEach for block EntireTestsuite
<Checks>
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 4
Number of "level=error" in logs: 0
⚠️  Number of "level=warning" in logs: 13
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 5 errors/warnings:
Key allocation attempt failed
UpdateIdentities: Skipping Delete of a non-existing identity
Unable to ensure that BPF JIT compilation is enabled. This can be ignored when Cilium is running inside non-host network namespace (e.g. with kind or minikube)
CONFIG_LWTUNNEL_BPF optional kernel parameter is not in kernel (needed for: Lightweight Tunnel hook for IP-in-IP encapsulation)
Unable to get node resource
Cilium pods: [cilium-4ndrx cilium-zx9gr]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
testds-2sgdj                  false     false
testds-cjjkz                  false     false
prometheus-5bc5cbbf9d-6n58g   false     false
test-k8s2-85df6cf7dc-cmhj4    false     false
testclient-cnszp              false     false
coredns-5d78c9869d-bm8t6      false     false
coredns-5d78c9869d-mp299      false     false
testclient-7wwlz              false     false
grafana-758c69b6df-82lwm      false     false
Cilium agent 'cilium-4ndrx': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 36 Failed 0
Cilium agent 'cilium-zx9gr': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 40 Failed 0

</Checks>


• Failure [199.043 seconds]
K8sDatapathConfig
/home/runner/work/cilium/cilium/test/ginkgo-ext/scopes.go:461
  Check BPF masquerading with ip-masq-agent
  /home/runner/work/cilium/cilium/test/ginkgo-ext/scopes.go:461
    DirectRouting [It]
    /home/runner/work/cilium/cilium/test/ginkgo-ext/scopes.go:515

    Pod "testclient-cnszp" can not connect to "[http://172.18.0.2:80](http://172.18.0.2/)"
    Expected command: kubectl exec -n 202306131436k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag testclient-cnszp -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 [http://172.18.0.2:80](http://172.18.0.2/) -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" --retry 10 | grep client_address= 
    To succeed, but it failed:
    Exitcode: 1 
    Err: exit status 1
    Stdout:
     	 
    Stderr:
     	 command terminated with exit code 52
    	 
    

    /home/runner/work/cilium/cilium/test/k8s/datapath_configuration.go:334
giorio94 added the area/CI and ci/flake labels on Jun 13, 2023.
@github-actions (bot) commented:
This issue has been automatically marked as stale because it has not
had recent activity. It will be closed if no further activity occurs.

The github-actions bot added the stale label on Aug 13, 2023.
@github-actions (bot) commented:
This issue has not seen any activity since it was marked stale.
Closing.

The github-actions bot closed this issue as not planned on Aug 27, 2023.