CI: K8sDatapathConfig Host firewall With VXLAN and endpoint routes #25342

Closed

maintainer-s-little-helper bot opened this issue May 9, 2023 · 3 comments

Labels: ci/flake (This is a known failure that occurs in the tree. Please investigate me!)


Test Name

K8sDatapathConfig Host firewall With VXLAN and endpoint routes

Failure Output

FAIL: Error deleting resource /home/jenkins/workspace/Cilium-PR-K8s-1.26-kernel-net-next/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml: Cannot retrieve "cilium-fgprl"'s policy revision: cannot get policy revision: ""

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.26-kernel-net-next/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Error deleting resource /home/jenkins/workspace/Cilium-PR-K8s-1.26-kernel-net-next/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml: Cannot retrieve "cilium-fgprl"'s policy revision: cannot get policy revision: ""
Expected
    <*errors.errorString | 0xc000612510>: {
        s: "Cannot retrieve \"cilium-fgprl\"'s policy revision: cannot get policy revision: \"\"",
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.26-kernel-net-next/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:567
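For anyone triaging this by hand: the failing step is the test harness reading the agent's policy revision after deleting host-policies.yaml. A minimal way to check the same value directly (a sketch, assuming the agent CLI is reachable inside the pod; the pod name below is the one from this run):

```
# Print the policy loaded in the affected agent together with its current revision;
# the test fails when that revision cannot be read back (empty string in the error above).
kubectl exec -n kube-system cilium-fgprl -c cilium-agent -- cilium policy get
```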

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 3
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
UpdateIdentities: Skipping Delete of a non-existing identity
Key allocation attempt failed
Cilium pods: [cilium-7vl98 cilium-fgprl]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                       Ingress   Egress
testclient-pfx8k          false     false
testclient-rkhvq          false     false
testserver-klmcm          false     false
testserver-zl79w          false     false
coredns-6d97d5ddb-jfksf   false     false
Cilium agent 'cilium-7vl98': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0
Cilium agent 'cilium-fgprl': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 34 Failed 0


Standard Error

11:42:42 STEP: Installing Cilium
11:42:44 STEP: Waiting for Cilium to become ready
11:43:00 STEP: Validating if Kubernetes DNS is deployed
11:43:00 STEP: Checking if deployment is ready
11:43:00 STEP: Checking if kube-dns service is plumbed correctly
11:43:00 STEP: Checking if pods have identity
11:43:00 STEP: Checking if DNS can resolve
11:43:05 STEP: Kubernetes DNS is not ready: %!s(<nil>)
11:43:05 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
11:43:05 STEP: Waiting for Kubernetes DNS to become operational
11:43:05 STEP: Checking if deployment is ready
11:43:06 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
11:43:06 STEP: Checking if deployment is ready
11:43:07 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
11:43:07 STEP: Checking if deployment is ready
11:43:14 STEP: Checking if kube-dns service is plumbed correctly
11:43:14 STEP: Checking if pods have identity
11:43:14 STEP: Checking if DNS can resolve
11:43:17 STEP: Validating Cilium Installation
11:43:17 STEP: Performing Cilium controllers preflight check
11:43:17 STEP: Performing Cilium status preflight check
11:43:17 STEP: Performing Cilium health check
11:43:17 STEP: Checking whether host EP regenerated
11:43:25 STEP: Performing Cilium service preflight check
11:43:25 STEP: Performing K8s service preflight check
11:43:31 STEP: Waiting for cilium-operator to be ready
11:43:31 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
11:43:31 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
11:43:31 STEP: Making sure all endpoints are in ready state
11:43:34 STEP: Creating namespace 202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro
11:43:34 STEP: Deploying demo_hostfw.yaml in namespace 202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro
11:43:35 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready
11:43:35 STEP: WaitforNPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="")
11:43:39 STEP: WaitforNPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="") => <nil>
11:43:39 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.26-kernel-net-next/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml
11:43:56 STEP: Checking host policies on ingress from local pod
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClient")
11:43:56 STEP: Checking host policies on egress to local pod
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClientHost")
11:43:56 STEP: Checking host policies on ingress from remote pod
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClient")
11:43:56 STEP: Checking host policies on ingress from remote node
11:43:56 STEP: Checking host policies on egress to remote node
11:43:56 STEP: Checking host policies on egress to remote pod
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClientHost")
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClientHost")
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServerHost")
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClientHost") => <nil>
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClientHost") => <nil>
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClient") => <nil>
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClient") => <nil>
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServerHost") => <nil>
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClientHost") => <nil>
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServerHost")
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServer")
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServerHost")
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServerHost")
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServer")
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClientHost")
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServerHost") => <nil>
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServerHost") => <nil>
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServer") => <nil>
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServerHost") => <nil>
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServer") => <nil>
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClientHost") => <nil>
FAIL: Error deleting resource /home/jenkins/workspace/Cilium-PR-K8s-1.26-kernel-net-next/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml: Cannot retrieve "cilium-fgprl"'s policy revision: cannot get policy revision: ""
Expected
    <*errors.errorString | 0xc000612510>: {
        s: "Cannot retrieve \"cilium-fgprl\"'s policy revision: cannot get policy revision: \"\"",
    }
to be nil
=== Test Finished at 2023-05-09T11:44:14Z====
11:44:14 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
11:44:15 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS    RESTARTS   AGE    IP              NODE   NOMINATED NODE   READINESS GATES
	 202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro   testclient-host-crdjn              1/1     Running   0          54s    192.168.56.12   k8s2   <none>           <none>
	 202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro   testclient-host-xwc2b              1/1     Running   0          54s    192.168.56.11   k8s1   <none>           <none>
	 202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro   testclient-pfx8k                   1/1     Running   0          55s    10.0.0.19       k8s2   <none>           <none>
	 202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro   testclient-rkhvq                   1/1     Running   0          55s    10.0.1.168      k8s1   <none>           <none>
	 202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro   testserver-host-6gszv              2/2     Running   0          55s    192.168.56.11   k8s1   <none>           <none>
	 202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro   testserver-host-89269              2/2     Running   0          55s    192.168.56.12   k8s2   <none>           <none>
	 202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro   testserver-klmcm                   2/2     Running   0          55s    10.0.0.95       k8s2   <none>           <none>
	 202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro   testserver-zl79w                   2/2     Running   0          55s    10.0.1.117      k8s1   <none>           <none>
	 cilium-monitoring                                                 grafana-67ff49cd99-6vjgl           0/1     Running   0          45m    10.0.0.108      k8s1   <none>           <none>
	 cilium-monitoring                                                 prometheus-8c7df94b4-t2df2         1/1     Running   0          45m    10.0.0.113      k8s1   <none>           <none>
	 kube-system                                                       cilium-7vl98                       1/1     Running   0          105s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-fgprl                       1/1     Running   0          105s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-7bc6974595-px8cr   1/1     Running   0          105s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-7bc6974595-stspt   1/1     Running   0          105s   192.168.56.13   k8s3   <none>           <none>
	 kube-system                                                       coredns-6d97d5ddb-jfksf            1/1     Running   0          84s    10.0.0.62       k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running   0          53m    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running   0          53m    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running   0          53m    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running   0          53m    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-gxlpz                 1/1     Running   0          45m    192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       log-gatherer-n2rkv                 1/1     Running   0          45m    192.168.56.13   k8s3   <none>           <none>
	 kube-system                                                       log-gatherer-xxzgr                 1/1     Running   0          45m    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-7fgns               1/1     Running   0          46m    192.168.56.13   k8s3   <none>           <none>
	 kube-system                                                       registry-adder-tnnnx               1/1     Running   0          46m    192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-trlbd               1/1     Running   0          46m    192.168.56.11   k8s1   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-7vl98 cilium-fgprl]
cmd: kubectl exec -n kube-system cilium-7vl98 -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.26 (v1.26.3) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumCIDRGroup", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Strict    [enp0s16 192.168.59.15 fd17:625c:f037:2:a00:27ff:fead:5b39, enp0s3 10.0.2.15 fd04::11, enp0s8 192.168.56.11 fd04::11]
	 Host firewall:           Enabled   [enp0s16, enp0s3, enp0s8]
	 CNI Chaining:            none
	 Cilium:                  Ok   1.14.0-dev (v1.14.0-dev-361e634c)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            BPF   [enp0s16, enp0s3, enp0s8]   10.0.0.0/8 [IPv4: Enabled, IPv6: Disabled]
	 Controller Status:       30/30 healthy
	 Proxy Status:            OK, ip 10.0.1.152, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok              Current/Max Flows: 1160/65535 (1.77%), Flows/s: 12.21   Metrics: Disabled
	 Encryption:              Disabled        
	 Cluster health:          2/2 reachable   (2023-05-09T11:43:24Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-7vl98 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                            
	 707        Disabled           Disabled          64110      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro   fd02::1cb   10.0.1.168   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro                                                                   
	                                                            k8s:test=hostfw                                                                                                                                                   
	                                                            k8s:zgroup=testClient                                                                                                                                             
	 783        Enabled            Enabled           1          k8s:cilium.io/ci-node=k8s1                                                                                                                                ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                         
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                       
	                                                            k8s:status=lockdown                                                                                                                                               
	                                                            reserved:host                                                                                                                                                     
	 977        Disabled           Disabled          6549       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro   fd02::1ab   10.0.1.117   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro                                                                   
	                                                            k8s:test=hostfw                                                                                                                                                   
	                                                            k8s:zgroup=testServer                                                                                                                                             
	 3650       Disabled           Disabled          4          reserved:health                                                                                                                  fd02::146   10.0.1.110   ready   
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-fgprl -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.26 (v1.26.3) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumCIDRGroup", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Strict    [enp0s16 192.168.59.15 fd17:625c:f037:2:a00:27ff:fe8d:46df, enp0s3 10.0.2.15 fd04::12, enp0s8 192.168.56.12 fd04::12]
	 Host firewall:           Enabled   [enp0s16, enp0s3, enp0s8]
	 CNI Chaining:            none
	 Cilium:                  Ok   1.14.0-dev (v1.14.0-dev-361e634c)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 5/254 allocated from 10.0.0.0/24, IPv6: 5/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            BPF   [enp0s16, enp0s3, enp0s8]   10.0.0.0/8 [IPv4: Enabled, IPv6: Disabled]
	 Controller Status:       34/34 healthy
	 Proxy Status:            OK, ip 10.0.0.124, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok              Current/Max Flows: 1087/65535 (1.66%), Flows/s: 11.47   Metrics: Disabled
	 Encryption:              Disabled        
	 Cluster health:          1/2 reachable   (2023-05-09T11:44:12Z)
	   Name                   IP              Node      Endpoints
	   k8s1                   192.168.56.11   unknown   unreachable
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-fgprl -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4        STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                          
	 133        Disabled           Disabled          6549       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro   fd02::3b   10.0.0.95   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                        
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                 
	                                                            k8s:io.kubernetes.pod.namespace=202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro                                                                 
	                                                            k8s:test=hostfw                                                                                                                                                 
	                                                            k8s:zgroup=testServer                                                                                                                                           
	 434        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::99   10.0.0.60   ready   
	 593        Enabled            Enabled           1          k8s:cilium.io/ci-node=k8s2                                                                                                                              ready   
	                                                            k8s:status=lockdown                                                                                                                                             
	                                                            reserved:host                                                                                                                                                   
	 1906       Disabled           Disabled          3831       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::b    10.0.0.62   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                        
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                 
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                     
	                                                            k8s:k8s-app=kube-dns                                                                                                                                            
	 2479       Disabled           Disabled          64110      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro   fd02::11   10.0.0.19   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                        
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                 
	                                                            k8s:io.kubernetes.pod.namespace=202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro                                                                 
	                                                            k8s:test=hostfw                                                                                                                                                 
	                                                            k8s:zgroup=testClient                                                                                                                                           
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
11:45:01 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
11:45:01 STEP: Deleting deployment demo_hostfw.yaml
11:45:01 STEP: Deleting namespace 202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro
11:45:17 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|024b6c05_K8sDatapathConfig_Host_firewall_With_VXLAN_and_endpoint_routes.zip]]


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-1.26-kernel-net-next//2154/artifact/024b6c05_K8sDatapathConfig_Host_firewall_With_VXLAN_and_endpoint_routes.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.26-kernel-net-next//2154/artifact/913c5175_K8sDatapathServicesTest_Checks_N-S_loadbalancing_With_host_policy_Tests_NodePort.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.26-kernel-net-next//2154/artifact/bf448f31_K8sDatapathConfig_Host_firewall_With_native_routing_and_endpoint_routes.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.26-kernel-net-next//2154/artifact/c1880546_K8sDatapathConfig_Host_firewall_With_native_routing.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.26-kernel-net-next//2154/artifact/test_results_Cilium-PR-K8s-1.26-kernel-net-next_2154_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.26-kernel-net-next/2154/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

@maintainer-s-little-helper maintainer-s-little-helper bot added the ci/flake This is a known failure that occurs in the tree. Please investigate me! label May 9, 2023
@tommyp1ckles tommyp1ckles self-assigned this May 10, 2023
tommyp1ckles added a commit to tommyp1ckles/cilium that referenced this issue May 11, 2023
* Allow ICMP/ICMPv6 traffic on all endpoints/nodes.
* Allow connections required by kube-dns and kind deployments, in order to reduce clutter in drops file when debugging.

In local tests, this cleans up the `cilium-health status` output:

```
Probe time:   2023-05-11T01:42:25Z
Nodes:
  kind-kind/kind-worker (localhost):
    Host connectivity to 172.18.0.3:
      ICMP to stack:   OK, RTT=533.239µs
      HTTP to agent:   OK, RTT=150.976µs
    Endpoint connectivity to 10.244.1.216:
      ICMP to stack:   OK, RTT=608.257µs
      HTTP to agent:   OK, RTT=242.148µs
  kind-kind/kind-control-plane:
    Host connectivity to 172.18.0.2:
      ICMP to stack:   OK, RTT=582.786µs
      HTTP to agent:   OK, RTT=203.352µs
    Endpoint connectivity to 10.244.0.88:
      ICMP to stack:   OK, RTT=557.07µs
      HTTP to agent:   OK, RTT=460.655µs
```

Fixes: cilium#25344 cilium#25343 cilium#25342

Signed-off-by: Tom Hadlaw <tom.hadlaw@isovalent.com>
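For illustration only, a rule of the kind the commit describes (ICMP/ICMPv6 echo allowed on all nodes) might look roughly like the sketch below. The policy name and field values are assumptions for this example and are not the actual content of the referenced commit:

```
# Hypothetical sketch: allow ICMP/ICMPv6 echo requests on all nodes via a
# clusterwide host policy. Name and values are illustrative only.
kubectl apply -f - <<'EOF'
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: allow-icmp-echo-hostfw   # hypothetical name
spec:
  nodeSelector: {}               # select all nodes (host endpoints)
  ingress:
  - icmps:
    - fields:
      - type: 8                  # ICMP Echo Request
        family: IPv4
      - type: 128                # ICMPv6 Echo Request
        family: IPv6
EOF
```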
@maintainer-s-little-helper (Author)

PR #25348 hit this flake with 88.76% similarity:


Test Name

K8sDatapathConfig Host firewall With VXLAN and endpoint routes

Failure Output

FAIL: Error deleting resource /home/jenkins/workspace/Cilium-PR-K8s-1.22-kernel-5.4/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml: Cannot retrieve "cilium-klrh9"'s policy revision: cannot get policy revision: ""

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.22-kernel-5.4/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:527
Error deleting resource /home/jenkins/workspace/Cilium-PR-K8s-1.22-kernel-5.4/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml: Cannot retrieve "cilium-klrh9"'s policy revision: cannot get policy revision: ""
Expected
    <*errors.errorString | 0xc00125a6c0>: {
        s: "Cannot retrieve \"cilium-klrh9\"'s policy revision: cannot get policy revision: \"\"",
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.22-kernel-5.4/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:903

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 2
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 2
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 1 errors/warnings:
Session affinity for host reachable services needs kernel 5.7.0 or newer to work properly when accessed from inside cluster: the same service endpoint will be selected from all network namespaces on the host.
Cilium pods: [cilium-klrh9 cilium-m7dlb]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
testclient-gj49p              false     false
testclient-xk6xf              false     false
testserver-75t8v              false     false
testserver-fqxjv              false     false
grafana-5747bcc8f9-d5rzx      false     false
prometheus-655fb888d7-l6s6p   false     false
coredns-69b675786c-hcv2v      false     false
Cilium agent 'cilium-klrh9': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 32 Failed 0
Cilium agent 'cilium-m7dlb': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 38 Failed 0


Standard Error

02:16:00 STEP: Installing Cilium
02:16:02 STEP: Waiting for Cilium to become ready
02:16:21 STEP: Validating if Kubernetes DNS is deployed
02:16:21 STEP: Checking if deployment is ready
02:16:21 STEP: Checking if kube-dns service is plumbed correctly
02:16:21 STEP: Checking if DNS can resolve
02:16:21 STEP: Checking if pods have identity
02:16:26 STEP: Kubernetes DNS is not ready: 5s timeout expired
02:16:26 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
02:16:27 STEP: Waiting for Kubernetes DNS to become operational
02:16:27 STEP: Checking if deployment is ready
02:16:27 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
02:16:28 STEP: Checking if deployment is ready
02:16:28 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
02:16:29 STEP: Checking if deployment is ready
02:16:29 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
02:16:30 STEP: Checking if deployment is ready
02:16:30 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
02:16:31 STEP: Checking if deployment is ready
02:16:31 STEP: Checking if kube-dns service is plumbed correctly
02:16:31 STEP: Checking if pods have identity
02:16:31 STEP: Checking if DNS can resolve
02:16:34 STEP: Checking service kube-system/kube-dns plumbing in cilium pod cilium-m7dlb: unable to find service backend 10.0.0.191:53 in datapath of cilium pod cilium-m7dlb
02:16:36 STEP: Validating Cilium Installation
02:16:36 STEP: Performing Cilium controllers preflight check
02:16:36 STEP: Performing Cilium health check
02:16:36 STEP: Checking whether host EP regenerated
02:16:36 STEP: Performing Cilium status preflight check
02:16:45 STEP: Performing Cilium service preflight check
02:16:45 STEP: Performing K8s service preflight check
02:16:45 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-klrh9': Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
	 Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory
	 
	 command terminated with exit code 1
	 

02:16:45 STEP: Performing Cilium controllers preflight check
02:16:45 STEP: Performing Cilium status preflight check
02:16:45 STEP: Performing Cilium health check
02:16:45 STEP: Checking whether host EP regenerated
02:16:52 STEP: Performing Cilium service preflight check
02:16:52 STEP: Performing K8s service preflight check
02:16:52 STEP: Performing Cilium controllers preflight check
02:16:52 STEP: Performing Cilium health check
02:16:52 STEP: Performing Cilium status preflight check
02:16:52 STEP: Checking whether host EP regenerated
02:17:00 STEP: Performing Cilium service preflight check
02:17:00 STEP: Performing K8s service preflight check
02:17:06 STEP: Waiting for cilium-operator to be ready
02:17:06 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
02:17:06 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
02:17:06 STEP: Making sure all endpoints are in ready state
02:17:09 STEP: Creating namespace 202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro
02:17:09 STEP: Deploying demo_hostfw.yaml in namespace 202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro
02:17:09 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready
02:17:09 STEP: WaitforNPods(namespace="202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="")
02:17:12 STEP: WaitforNPods(namespace="202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="") => <nil>
02:17:12 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.22-kernel-5.4/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml
02:17:22 STEP: Checking host policies on egress to remote pod
02:17:22 STEP: Checking host policies on ingress from local pod
02:17:22 STEP: Checking host policies on egress to local pod
02:17:22 STEP: WaitforPods(namespace="202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClient")
02:17:22 STEP: WaitforPods(namespace="202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClientHost")
02:17:22 STEP: WaitforPods(namespace="202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClientHost")
02:17:22 STEP: Checking host policies on ingress from remote node
02:17:22 STEP: WaitforPods(namespace="202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServerHost")
02:17:22 STEP: Checking host policies on ingress from remote pod
02:17:22 STEP: WaitforPods(namespace="202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClient")
02:17:22 STEP: Checking host policies on egress to remote node
02:17:22 STEP: WaitforPods(namespace="202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClientHost")
02:17:22 STEP: WaitforPods(namespace="202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClient") => <nil>
02:17:22 STEP: WaitforPods(namespace="202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClientHost") => <nil>
02:17:22 STEP: WaitforPods(namespace="202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClient") => <nil>
02:17:22 STEP: WaitforPods(namespace="202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServerHost") => <nil>
02:17:22 STEP: WaitforPods(namespace="202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClientHost") => <nil>
02:17:22 STEP: WaitforPods(namespace="202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClientHost") => <nil>
02:17:23 STEP: WaitforPods(namespace="202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServerHost")
02:17:23 STEP: WaitforPods(namespace="202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServerHost")
02:17:23 STEP: WaitforPods(namespace="202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClientHost")
02:17:23 STEP: WaitforPods(namespace="202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServerHost")
02:17:23 STEP: WaitforPods(namespace="202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServer")
02:17:23 STEP: WaitforPods(namespace="202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServer")
02:17:23 STEP: WaitforPods(namespace="202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServerHost") => <nil>
02:17:23 STEP: WaitforPods(namespace="202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServerHost") => <nil>
02:17:23 STEP: WaitforPods(namespace="202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClientHost") => <nil>
02:17:23 STEP: WaitforPods(namespace="202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServerHost") => <nil>
02:17:23 STEP: WaitforPods(namespace="202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServer") => <nil>
02:17:23 STEP: WaitforPods(namespace="202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServer") => <nil>
FAIL: Error deleting resource /home/jenkins/workspace/Cilium-PR-K8s-1.22-kernel-5.4/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml: Cannot retrieve "cilium-klrh9"'s policy revision: cannot get policy revision: ""
Expected
    <*errors.errorString | 0xc00125a6c0>: {
        s: "Cannot retrieve \"cilium-klrh9\"'s policy revision: cannot get policy revision: \"\"",
    }
to be nil
=== Test Finished at 2023-05-11T02:17:39Z====
02:17:39 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
02:17:39 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS    RESTARTS   AGE    IP              NODE   NOMINATED NODE   READINESS GATES
	 202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro   testclient-gj49p                   1/1     Running   0          35s    10.0.1.171      k8s2   <none>           <none>
	 202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro   testclient-host-9hdff              1/1     Running   0          35s    192.168.56.11   k8s1   <none>           <none>
	 202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro   testclient-host-br76f              1/1     Running   0          35s    192.168.56.12   k8s2   <none>           <none>
	 202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro   testclient-xk6xf                   1/1     Running   0          35s    10.0.0.14       k8s1   <none>           <none>
	 202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro   testserver-75t8v                   2/2     Running   0          35s    10.0.0.39       k8s1   <none>           <none>
	 202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro   testserver-fqxjv                   2/2     Running   0          35s    10.0.1.6        k8s2   <none>           <none>
	 202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro   testserver-host-6fbk9              2/2     Running   0          35s    192.168.56.12   k8s2   <none>           <none>
	 202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro   testserver-host-w9n9c              2/2     Running   0          35s    192.168.56.11   k8s1   <none>           <none>
	 cilium-monitoring                                                 grafana-5747bcc8f9-d5rzx           1/1     Running   0          21m    10.0.0.197      k8s1   <none>           <none>
	 cilium-monitoring                                                 prometheus-655fb888d7-l6s6p        1/1     Running   0          21m    10.0.0.154      k8s1   <none>           <none>
	 kube-system                                                       cilium-klrh9                       1/1     Running   0          102s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-m7dlb                       1/1     Running   0          102s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-5966595f58-8m9tp   1/1     Running   0          102s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-5966595f58-vzb4j   1/1     Running   0          102s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       coredns-69b675786c-hcv2v           1/1     Running   0          77s    10.0.1.95       k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running   0          25m    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running   0          25m    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running   0          25m    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-vj6lz                   1/1     Running   0          25m    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-xjz8r                   1/1     Running   0          22m    192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running   0          25m    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-cwhfr                 1/1     Running   0          21m    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-wxkjq                 1/1     Running   0          21m    192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-4wm8c               1/1     Running   0          22m    192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-ggqgs               1/1     Running   0          22m    192.168.56.11   k8s1   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-klrh9 cilium-m7dlb]
cmd: kubectl exec -n kube-system cilium-klrh9 -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.22 (v1.22.17) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Strict    [enp0s16 192.168.59.15 fd17:625c:f037:2:a00:27ff:fee6:12be, enp0s3 10.0.2.15 fd04::12, enp0s8 192.168.56.12 fd04::12]
	 Host firewall:           Enabled   [enp0s16, enp0s3, enp0s8]
	 CNI Chaining:            none
	 Cilium:                  Ok   1.12.9 (v1.12.9-f26818a)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 5/254 allocated from 10.0.1.0/24, IPv6: 5/254 allocated from fd02::100/120
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            BPF   [enp0s16, enp0s3, enp0s8]   10.0.0.0/8 [IPv4: Enabled, IPv6: Disabled]
	 Controller Status:       32/32 healthy
	 Proxy Status:            OK, ip 10.0.1.70, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 520/65535 (0.79%), Flows/s: 5.80   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2023-05-11T02:16:59Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-klrh9 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                            
	 45         Disabled           Disabled          1300       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::164   10.0.1.95    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                       
	                                                            k8s:k8s-app=kube-dns                                                                                                                                              
	 71         Disabled           Disabled          4          reserved:health                                                                                                                  fd02::196   10.0.1.16    ready   
	 407        Disabled           Disabled          63867      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro   fd02::18c   10.0.1.171   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro                                                                   
	                                                            k8s:test=hostfw                                                                                                                                                   
	                                                            k8s:zgroup=testClient                                                                                                                                             
	 910        Disabled           Disabled          9689       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro   fd02::1c5   10.0.1.6     ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro                                                                   
	                                                            k8s:test=hostfw                                                                                                                                                   
	                                                            k8s:zgroup=testServer                                                                                                                                             
	 1587       Enabled            Enabled           1          k8s:cilium.io/ci-node=k8s2                                                                                                                                ready   
	                                                            k8s:status=lockdown                                                                                                                                               
	                                                            reserved:host                                                                                                                                                     
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-m7dlb -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.22 (v1.22.17) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Strict    [enp0s16 192.168.59.15 fd17:625c:f037:2:a00:27ff:feea:bcaf, enp0s3 10.0.2.15 fd04::11, enp0s8 192.168.56.11 fd04::11]
	 Host firewall:           Enabled   [enp0s16, enp0s3, enp0s8]
	 CNI Chaining:            none
	 Cilium:                  Ok   1.12.9 (v1.12.9-f26818a)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 6/254 allocated from 10.0.0.0/24, IPv6: 6/254 allocated from fd02::/120
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            BPF   [enp0s16, enp0s3, enp0s8]   10.0.0.0/8 [IPv4: Enabled, IPv6: Disabled]
	 Controller Status:       38/38 healthy
	 Proxy Status:            OK, ip 10.0.0.151, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 1422/65535 (2.17%), Flows/s: 17.03   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2023-05-11T02:17:06Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-m7dlb -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 54         Disabled           Disabled          63765      k8s:app=prometheus                                                                                                               fd02::16   10.0.0.154   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                                                           
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 414        Disabled           Disabled          9689       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro   fd02::88   10.0.0.39    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro                                                                  
	                                                            k8s:test=hostfw                                                                                                                                                  
	                                                            k8s:zgroup=testServer                                                                                                                                            
	 1215       Disabled           Disabled          56874      k8s:app=grafana                                                                                                                  fd02::a1   10.0.0.197   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 1414       Disabled           Disabled          4          reserved:health                                                                                                                  fd02::df   10.0.0.155   ready   
	 1750       Disabled           Disabled          63867      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro   fd02::87   10.0.0.14    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro                                                                  
	                                                            k8s:test=hostfw                                                                                                                                                  
	                                                            k8s:zgroup=testClient                                                                                                                                            
	 2070       Enabled            Enabled           1          k8s:cilium.io/ci-node=k8s1                                                                                                                               ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                        
	                                                            k8s:node-role.kubernetes.io/master                                                                                                                               
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                      
	                                                            k8s:status=lockdown                                                                                                                                              
	                                                            reserved:host                                                                                                                                                    
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
02:17:53 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
02:17:53 STEP: Deleting deployment demo_hostfw.yaml
02:17:54 STEP: Deleting namespace 202305110217k8sdatapathconfighostfirewallwithvxlanandendpointro
02:18:09 STEP: Running AfterEach for block EntireTestsuite
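
For reference, the agent status dumped above reflects the configuration this test exercises: host firewall enabled on the node devices, VXLAN tunneling, and per-endpoint routes. A minimal sketch of Helm values that should produce an equivalent setup is shown below; these are the upstream chart options as I understand them (hostFirewall.enabled, endpointRoutes.enabled, tunnel, devices), not the exact flags the CI harness passes, and the device list is copied from the status output purely for illustration.

```yaml
# Hedged sketch: Helm values approximating the "Host firewall With VXLAN and
# endpoint routes" configuration. Not the exact values used by the CI job.
hostFirewall:
  enabled: true                        # "Host firewall: Enabled" in cilium status
endpointRoutes:
  enabled: true                        # per-endpoint routes, as in the test name
tunnel: vxlan                          # VXLAN overlay between nodes
devices: "{enp0s3,enp0s8,enp0s16}"     # devices listed in the status output above
```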

[[ATTACHMENT|b01d098d_K8sDatapathConfig_Host_firewall_With_VXLAN_and_endpoint_routes.zip]]


ZIP Links:

Click to show.

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-5.4//928/artifact/3be83be0_K8sDatapathConfig_Host_firewall_With_native_routing_and_endpoint_routes.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-5.4//928/artifact/b01d098d_K8sDatapathConfig_Host_firewall_With_VXLAN_and_endpoint_routes.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-5.4//928/artifact/e64ec2b7_K8sDatapathConfig_Host_firewall_With_native_routing.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-5.4//928/artifact/test_results_Cilium-PR-K8s-1.22-kernel-5.4_928_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-5.4/928/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

tommyp1ckles added a commit to tommyp1ckles/cilium that referenced this issue May 11, 2023
Allow ICMP/ICMPv6 traffic on all nodes.

Fixes: cilium#25344 cilium#25343 cilium#25342

Signed-off-by: Tom Hadlaw <tom.hadlaw@isovalent.com>
tommyp1ckles added a commit that referenced this issue May 12, 2023
Allow ICMP/ICMPv6 traffic on all nodes.

Fixes: #25344 #25343 #25342

Signed-off-by: Tom Hadlaw <tom.hadlaw@isovalent.com>
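
The commit above addresses this flake by allowing ICMP between nodes in the host firewall policies used by the test. As a rough illustration only (not the contents of that commit), an ICMP/ICMPv6 allow rule in a host policy can look like the sketch below; the policy name, node selector, and ICMP type values are assumptions, and ICMP-based rules may need to be enabled in the agent before they take effect.

```yaml
# Illustrative sketch only -- not the actual change from the commit above.
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: allow-node-icmp          # hypothetical name
spec:
  nodeSelector: {}               # select all nodes (host endpoints)
  ingress:
    - fromEntities:
        - remote-node            # traffic from other cluster nodes
      icmps:
        - fields:
            - type: 8            # ICMP EchoRequest
              family: IPv4
            - type: 128          # ICMPv6 EchoRequest
              family: IPv6
```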
@tommyp1ckles
Contributor

Closing with #25374

@tommyp1ckles
Contributor

Reopening, still need backports

@tommyp1ckles reopened this May 12, 2023
jibi pushed a commit that referenced this issue May 17, 2023
[ upstream commit 8f7a537 ]

Allow ICMP/ICMPv6 traffic on all nodes.

Fixes: #25344 #25343 #25342

Signed-off-by: Tom Hadlaw <tom.hadlaw@isovalent.com>
Signed-off-by: Gilberto Bertin <jibi@cilium.io>
aditighag pushed a commit that referenced this issue May 19, 2023
[ upstream commit 8f7a537 ]

Allow ICMP/ICMPv6 traffic on all nodes.

Fixes: #25344 #25343 #25342

Signed-off-by: Tom Hadlaw <tom.hadlaw@isovalent.com>
Signed-off-by: Gilberto Bertin <jibi@cilium.io>
pchaigno pushed a commit that referenced this issue Jan 8, 2024
[ upstream commit 8f7a537 ]

Allow ICMP/ICMPv6 traffic on all nodes.

Fixes: #25344 #25343 #25342

Signed-off-by: Tom Hadlaw <tom.hadlaw@isovalent.com>
Signed-off-by: Gilberto Bertin <jibi@cilium.io>
Signed-off-by: Tobias Klauser <tobias@isovalent.com>