
CI: K8sDatapathConfig Host firewall With VXLAN and endpoint routes #24966

Closed
maintainer-s-little-helper bot opened this issue Apr 19, 2023 · 2 comments
Labels
ci/flake This is a known failure that occurs in the tree. Please investigate me!

Comments

@maintainer-s-little-helper

Test Name

K8sDatapathConfig Host firewall With VXLAN and endpoint routes

Failure Output

FAIL: Failed to reach 10.0.1.253:80 from testclient-host-67qmt

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.26-kernel-net-next/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Failed to reach 10.0.1.253:80 from testclient-host-67qmt
Expected command: kubectl exec -n 202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro testclient-host-67qmt -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://10.0.1.253:80/ -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" 
To succeed, but it failed:
Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 error: unable to upgrade connection: Authorization error (user=kube-apiserver-kubelet-client, verb=create, resource=nodes, subresource=proxy)
	 

/usr/local/go/src/runtime/asm_amd64.s:1598
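
Note that the stderr above shows the kubectl exec stream being rejected by the kubelet authorizer ("Authorization error ... verb=create, resource=nodes, subresource=proxy"), not curl timing out, so the harness path (API server -> kubelet) failed before the host-firewall datapath was exercised. A rough way to check whether the API server's kubelet client identity still holds the nodes/proxy permission is sketched below; the role and binding names are the upstream defaults and may differ in this CI cluster, and impersonation via --as requires impersonate rights for the caller.

  # Can the API server's kubelet client create nodes/proxy, which is what
  # `kubectl exec` needs to upgrade the connection to the kubelet?
  kubectl auth can-i create nodes/proxy --as=kube-apiserver-kubelet-client

  # Inspect the default role that grants kubelet API access and any binding
  # that references a kubelet client identity (diagnostic sketch only).
  kubectl describe clusterrole system:kubelet-api-admin
  kubectl get clusterrolebindings -o wide | grep -i kubelet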

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 2
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 2
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 1 errors/warnings:
UpdateIdentities: Skipping Delete of a non-existing identity
Cilium pods: [cilium-p2flx cilium-zm8lt]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                          Ingress   Egress
coredns-6d97d5ddb-d4264      false     false
testclient-d96pm             false     false
testclient-tgq7t             false     false
testserver-ctst7             false     false
testserver-kx4zf             false     false
grafana-67ff49cd99-hflq4     false     false
prometheus-8c7df94b4-l4hkd   false     false
Cilium agent 'cilium-p2flx': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 29 Failed 0
Cilium agent 'cilium-zm8lt': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 43 Failed 0


Standard Error

01:26:32 STEP: Installing Cilium
01:26:35 STEP: Waiting for Cilium to become ready
01:26:52 STEP: Validating if Kubernetes DNS is deployed
01:26:52 STEP: Checking if deployment is ready
01:26:52 STEP: Checking if kube-dns service is plumbed correctly
01:26:52 STEP: Checking if DNS can resolve
01:26:52 STEP: Checking if pods have identity
01:26:54 STEP: Checking service kube-system/kube-dns plumbing in cilium pod cilium-zm8lt: ClusterIP 10.96.0.10 not found in service list of cilium pod cilium-zm8lt
01:26:56 STEP: Checking service kube-system/kube-dns plumbing in cilium pod cilium-zm8lt: ClusterIP 10.96.0.10 not found in service list of cilium pod cilium-zm8lt
01:26:58 STEP: Checking service kube-system/kube-dns plumbing in cilium pod cilium-zm8lt: ClusterIP 10.96.0.10 not found in service list of cilium pod cilium-zm8lt
01:27:00 STEP: Checking service kube-system/kube-dns plumbing in cilium pod cilium-zm8lt: ClusterIP 10.96.0.10 not found in service list of cilium pod cilium-zm8lt
01:27:02 STEP: Kubernetes DNS is not ready: ClusterIP 10.96.0.10 not found in service list of cilium pod cilium-zm8lt
01:27:02 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
01:27:02 STEP: Waiting for Kubernetes DNS to become operational
01:27:02 STEP: Checking if deployment is ready
01:27:02 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
01:27:03 STEP: Checking if deployment is ready
01:27:03 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
01:27:04 STEP: Checking if deployment is ready
01:27:04 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
01:27:05 STEP: Checking if deployment is ready
01:27:05 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
01:27:06 STEP: Checking if deployment is ready
01:27:06 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
01:27:07 STEP: Checking if deployment is ready
01:27:07 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
01:27:08 STEP: Checking if deployment is ready
01:27:08 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
01:27:09 STEP: Checking if deployment is ready
01:27:09 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
01:27:10 STEP: Checking if deployment is ready
01:27:10 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
01:27:11 STEP: Checking if deployment is ready
01:27:11 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
01:27:12 STEP: Checking if deployment is ready
01:27:12 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
01:27:13 STEP: Checking if deployment is ready
01:27:13 STEP: Checking if kube-dns service is plumbed correctly
01:27:13 STEP: Checking if DNS can resolve
01:27:13 STEP: Checking if pods have identity
01:27:17 STEP: Validating Cilium Installation
01:27:17 STEP: Performing Cilium controllers preflight check
01:27:17 STEP: Performing Cilium status preflight check
01:27:17 STEP: Performing Cilium health check
01:27:17 STEP: Checking whether host EP regenerated
01:27:24 STEP: Performing Cilium service preflight check
01:27:24 STEP: Performing K8s service preflight check
01:27:24 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-p2flx': Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
	 Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory
	 
	 command terminated with exit code 1
	 

01:27:24 STEP: Performing Cilium status preflight check
01:27:24 STEP: Performing Cilium health check
01:27:24 STEP: Checking whether host EP regenerated
01:27:24 STEP: Performing Cilium controllers preflight check
01:27:32 STEP: Performing Cilium service preflight check
01:27:32 STEP: Performing K8s service preflight check
01:27:38 STEP: Waiting for cilium-operator to be ready
01:27:38 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
01:27:38 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
01:27:38 STEP: Making sure all endpoints are in ready state
01:27:41 STEP: Creating namespace 202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro
01:27:41 STEP: Deploying demo_hostfw.yaml in namespace 202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro
01:27:41 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready
01:27:41 STEP: WaitforNPods(namespace="202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="")
01:27:45 STEP: WaitforNPods(namespace="202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="") => <nil>
01:27:45 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.26-kernel-net-next/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml
01:28:06 STEP: Checking host policies on egress to remote node
01:28:06 STEP: Checking host policies on egress to local pod
01:28:06 STEP: Checking host policies on egress to remote pod
01:28:06 STEP: WaitforPods(namespace="202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClientHost")
01:28:06 STEP: WaitforPods(namespace="202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClientHost")
01:28:06 STEP: WaitforPods(namespace="202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClientHost")
01:28:06 STEP: Checking host policies on ingress from remote pod
01:28:06 STEP: WaitforPods(namespace="202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClient")
01:28:06 STEP: Checking host policies on ingress from local pod
01:28:06 STEP: WaitforPods(namespace="202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClient")
01:28:06 STEP: Checking host policies on ingress from remote node
01:28:06 STEP: WaitforPods(namespace="202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServerHost")
01:28:06 STEP: WaitforPods(namespace="202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClientHost") => <nil>
01:28:06 STEP: WaitforPods(namespace="202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClient") => <nil>
01:28:06 STEP: WaitforPods(namespace="202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClientHost") => <nil>
01:28:06 STEP: WaitforPods(namespace="202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServerHost") => <nil>
01:28:06 STEP: WaitforPods(namespace="202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClientHost") => <nil>
01:28:06 STEP: WaitforPods(namespace="202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClient") => <nil>
01:28:06 STEP: WaitforPods(namespace="202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClientHost")
01:28:06 STEP: WaitforPods(namespace="202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServer")
01:28:06 STEP: WaitforPods(namespace="202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServer")
01:28:06 STEP: WaitforPods(namespace="202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServerHost")
01:28:06 STEP: WaitforPods(namespace="202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServerHost")
01:28:06 STEP: WaitforPods(namespace="202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServerHost")
01:28:06 STEP: WaitforPods(namespace="202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServer") => <nil>
01:28:06 STEP: WaitforPods(namespace="202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServerHost") => <nil>
01:28:06 STEP: WaitforPods(namespace="202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServer") => <nil>
01:28:06 STEP: WaitforPods(namespace="202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClientHost") => <nil>
01:28:06 STEP: WaitforPods(namespace="202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServerHost") => <nil>
01:28:06 STEP: WaitforPods(namespace="202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServerHost") => <nil>
FAIL: Failed to reach 10.0.1.253:80 from testclient-host-67qmt
Expected command: kubectl exec -n 202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro testclient-host-67qmt -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://10.0.1.253:80/ -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" 
To succeed, but it failed:
Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 error: unable to upgrade connection: Authorization error (user=kube-apiserver-kubelet-client, verb=create, resource=nodes, subresource=proxy)
	 

FAIL: Failed to reach 192.168.56.11:80 from testserver-host-n7kx6
Expected command: kubectl exec -n 202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro testserver-host-n7kx6 -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://192.168.56.11:80/ -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" 
To succeed, but it failed:
Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 Defaulted container "web" out of: web, udp
	 error: unable to upgrade connection: Authorization error (user=kube-apiserver-kubelet-client, verb=create, resource=nodes, subresource=proxy)
	 

FAIL: Failed to reach 10.0.0.23:80 from testclient-host-67qmt
Expected command: kubectl exec -n 202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro testclient-host-67qmt -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://10.0.0.23:80/ -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" 
To succeed, but it failed:
Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 error: unable to upgrade connection: Authorization error (user=kube-apiserver-kubelet-client, verb=create, resource=nodes, subresource=proxy)
	 

FAIL: Failed to reach 192.168.56.11:80 from testclient-host-67qmt
Expected command: kubectl exec -n 202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro testclient-host-67qmt -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://192.168.56.11:80/ -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" 
To succeed, but it failed:
Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 error: unable to upgrade connection: Authorization error (user=kube-apiserver-kubelet-client, verb=create, resource=nodes, subresource=proxy)
	 

=== Test Finished at 2023-04-19T01:28:41Z====
01:28:41 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
01:28:41 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                             READY   STATUS    RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro   testclient-d96pm                 1/1     Running   0          65s     10.0.0.188      k8s1   <none>           <none>
	 202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro   testclient-host-67qmt            1/1     Running   0          65s     192.168.56.12   k8s2   <none>           <none>
	 202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro   testclient-host-znn6p            1/1     Running   0          65s     192.168.56.11   k8s1   <none>           <none>
	 202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro   testclient-tgq7t                 1/1     Running   0          65s     10.0.1.182      k8s2   <none>           <none>
	 202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro   testserver-ctst7                 2/2     Running   0          65s     10.0.0.23       k8s1   <none>           <none>
	 202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro   testserver-host-n7kx6            2/2     Running   0          65s     192.168.56.12   k8s2   <none>           <none>
	 202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro   testserver-host-zgl7p            2/2     Running   0          65s     192.168.56.11   k8s1   <none>           <none>
	 202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro   testserver-kx4zf                 2/2     Running   0          65s     10.0.1.253      k8s2   <none>           <none>
	 cilium-monitoring                                                 grafana-67ff49cd99-hflq4         1/1     Running   0          28m     10.0.0.216      k8s1   <none>           <none>
	 cilium-monitoring                                                 prometheus-8c7df94b4-l4hkd       1/1     Running   0          28m     10.0.0.179      k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-8fb79dc5-bblgj   1/1     Running   0          2m11s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-8fb79dc5-wbp7x   1/1     Running   0          2m11s   192.168.56.13   k8s3   <none>           <none>
	 kube-system                                                       cilium-p2flx                     1/1     Running   0          2m11s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-zm8lt                     1/1     Running   0          2m11s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       coredns-6d97d5ddb-d4264          1/1     Running   0          104s    10.0.0.146      k8s1   <none>           <none>
	 kube-system                                                       etcd-k8s1                        1/1     Running   0          41m     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1              1/1     Running   0          41m     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1     1/1     Running   0          41m     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1              1/1     Running   0          41m     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-5zr7t               1/1     Running   0          28m     192.168.56.13   k8s3   <none>           <none>
	 kube-system                                                       log-gatherer-99sm8               1/1     Running   0          28m     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-hw4lm               1/1     Running   0          28m     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-bxq7r             1/1     Running   0          29m     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-ljb9k             1/1     Running   0          29m     192.168.56.13   k8s3   <none>           <none>
	 kube-system                                                       registry-adder-wtphz             1/1     Running   0          29m     192.168.56.11   k8s1   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-p2flx cilium-zm8lt]
cmd: kubectl exec -n kube-system cilium-p2flx -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.26 (v1.26.3) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumCIDRGroup", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Strict    [enp0s16 192.168.59.15 fd17:625c:f037:2:a00:27ff:fe6d:61e2, enp0s3 10.0.2.15 fd04::12, enp0s8 192.168.56.12 fd04::12]
	 Host firewall:           Enabled   [enp0s16, enp0s3, enp0s8]
	 CNI Chaining:            none
	 Cilium:                  Ok   1.13.90 (v1.13.90-9bf3add2)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            BPF   [enp0s16, enp0s3, enp0s8]   10.0.0.0/8 [IPv4: Enabled, IPv6: Disabled]
	 Controller Status:       29/29 healthy
	 Proxy Status:            OK, ip 10.0.1.223, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok              Current/Max Flows: 593/65535 (0.90%), Flows/s: 4.67   Metrics: Disabled
	 Encryption:              Disabled        
	 Cluster health:          2/2 reachable   (2023-04-19T01:27:31Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-p2flx -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                            
	 566        Disabled           Disabled          20720      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro   fd02::15d   10.0.1.182   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro                                                                   
	                                                            k8s:test=hostfw                                                                                                                                                   
	                                                            k8s:zgroup=testClient                                                                                                                                             
	 858        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::1d8   10.0.1.129   ready   
	 873        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                                                ready   
	                                                            k8s:status=lockdown                                                                                                                                               
	                                                            reserved:host                                                                                                                                                     
	 1129       Disabled           Disabled          40538      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro   fd02::154   10.0.1.253   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro                                                                   
	                                                            k8s:test=hostfw                                                                                                                                                   
	                                                            k8s:zgroup=testServer                                                                                                                                             
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-zm8lt -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.26 (v1.26.3) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumCIDRGroup", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Strict    [enp0s16 192.168.59.15 fd17:625c:f037:2:a00:27ff:fef2:40d1, enp0s3 10.0.2.15 fd04::11, enp0s8 192.168.56.11 fd04::11]
	 Host firewall:           Enabled   [enp0s16, enp0s3, enp0s8]
	 CNI Chaining:            none
	 Cilium:                  Ok   1.13.90 (v1.13.90-9bf3add2)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 7/254 allocated from 10.0.0.0/24, IPv6: 7/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            BPF   [enp0s16, enp0s3, enp0s8]   10.0.0.0/8 [IPv4: Enabled, IPv6: Disabled]
	 Controller Status:       43/43 healthy
	 Proxy Status:            OK, ip 10.0.0.125, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok              Current/Max Flows: 1347/65535 (2.06%), Flows/s: 12.04   Metrics: Disabled
	 Encryption:              Disabled        
	 Cluster health:          1/2 reachable   (2023-04-19T01:28:09Z)
	   Name                   IP              Node      Endpoints
	   k8s1 (localhost)       192.168.56.11   unknown   unreachable
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-zm8lt -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 185        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                               ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                        
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                      
	                                                            k8s:status=lockdown                                                                                                                                              
	                                                            reserved:host                                                                                                                                                    
	 475        Disabled           Disabled          20720      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro   fd02::b8   10.0.0.188   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro                                                                  
	                                                            k8s:test=hostfw                                                                                                                                                  
	                                                            k8s:zgroup=testClient                                                                                                                                            
	 1238       Disabled           Disabled          11780      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::36   10.0.0.146   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                      
	                                                            k8s:k8s-app=kube-dns                                                                                                                                             
	 2063       Disabled           Disabled          34782      k8s:app=prometheus                                                                                                               fd02::a3   10.0.0.179   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                                                           
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 2950       Disabled           Disabled          17241      k8s:app=grafana                                                                                                                  fd02::23   10.0.0.216   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 3458       Disabled           Disabled          40538      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro   fd02::a4   10.0.0.23    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro                                                                  
	                                                            k8s:test=hostfw                                                                                                                                                  
	                                                            k8s:zgroup=testServer                                                                                                                                            
	 3748       Disabled           Disabled          4          reserved:health                                                                                                                  fd02::45   10.0.0.212   ready   
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
01:28:54 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
01:28:54 STEP: Deleting deployment demo_hostfw.yaml
01:28:55 STEP: Deleting namespace 202304190127k8sdatapathconfighostfirewallwithvxlanandendpointro
01:29:10 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|31699f78_K8sDatapathConfig_Host_firewall_With_VXLAN_and_endpoint_routes.zip]]


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-1.26-kernel-net-next//1887/artifact/31699f78_K8sDatapathConfig_Host_firewall_With_VXLAN_and_endpoint_routes.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.26-kernel-net-next//1887/artifact/test_results_Cilium-PR-K8s-1.26-kernel-net-next_1887_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.26-kernel-net-next/1887/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

@michi-covalent
Contributor

is this a duplicate of #15455?

@joestringer
Member

duplicate

@joestringer closed this as not planned on May 5, 2023