CI: Conformance Ginkgo - Host firewall With VXLAN and endpoint routes #28775

Closed
giorio94 opened this issue Oct 25, 2023 · 2 comments
Labels
area/CI: Continuous Integration testing issue or flake
ci/flake: This is a known failure that occurs in the tree. Please investigate me!
stale: The stale bot thinks this issue is old. Add "pinned" label to prevent this from becoming stale.

Comments

@giorio94 (Member)

CI failure

Hit on #28767
Link: https://github.com/cilium/cilium/actions/runs/6630315244/job/18011637784
Sysdump: test_results-E2E Test (1.26, f07-datapath-host).tar.gz

Failed during the pre-flight checks:

K8sDatapathConfig Host firewall 
  With VXLAN and endpoint routes
  /home/runner/work/cilium/cilium/test/ginkgo-ext/scopes.go:515
17:33:17 STEP: Installing Cilium
17:33:23 STEP: Waiting for Cilium to become ready
17:33:41 STEP: Validating if Kubernetes DNS is deployed
17:33:41 STEP: Checking if deployment is ready
17:33:42 STEP: Checking if kube-dns service is plumbed correctly
17:33:42 STEP: Checking if pods have identity
17:33:42 STEP: Checking if DNS can resolve
17:33:46 STEP: Kubernetes DNS is not ready: %!s(<nil>)
17:33:46 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
17:33:48 STEP: Waiting for Kubernetes DNS to become operational
17:33:48 STEP: Checking if deployment is ready
17:33:49 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
17:33:49 STEP: Checking if deployment is ready
17:33:50 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
17:33:50 STEP: Checking if deployment is ready
17:33:51 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
17:33:51 STEP: Checking if deployment is ready
17:33:52 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
17:33:52 STEP: Checking if deployment is ready
17:33:53 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
17:33:53 STEP: Checking if deployment is ready
17:33:54 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
17:33:54 STEP: Checking if deployment is ready
17:33:55 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
17:33:55 STEP: Checking if deployment is ready
17:33:56 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
17:33:56 STEP: Checking if deployment is ready
17:33:57 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
17:33:57 STEP: Checking if deployment is ready
17:33:58 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
17:33:58 STEP: Checking if deployment is ready
17:33:59 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
17:33:59 STEP: Checking if deployment is ready
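
The DNS readiness loop above never converged: zero of the two coredns replicas became available within the polling window. For reference, the readiness check and the DNS restart roughly correspond to the following kubectl commands (a sketch only; the coredns deployment name and the kube-system namespace are inferred from the pod names and the -l k8s-app=kube-dns selector in the log):

# What "Checking if deployment is ready" effectively polls:
kubectl -n kube-system get deployment -l k8s-app=kube-dns

# Wait for the rollout to complete (the test timed out at this stage):
kubectl -n kube-system rollout status deployment/coredns --timeout=2m

# What "Restarting Kubernetes DNS" does: delete the pods by label and
# let the deployment recreate them:
kubectl -n kube-system delete pod -l k8s-app=kube-dns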

===================== Exiting AfterFailed =====================
17:40:32 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
17:40:32 STEP: Running AfterEach for block EntireTestsuite
<Checks>
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
⚠️  Number of "level=warning" in logs: 10
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 5 errors/warnings:
Unable to ensure that BPF JIT compilation is enabled. This can be ignored when Cilium is running inside non-host network namespace (e.g. with kind or minikube)
Deprecated value for --kube-proxy-replacement: strict (use either \
Disabling socket-LB tracing as it requires kernel 5.7 or newer
Session affinity for host reachable services needs kernel 5.7.0 or newer to work properly when accessed from inside cluster: the same service endpoint will be selected from all network namespaces on the host.
Unable to update CiliumNode resource, will retry
Cilium pods: [cilium-gbt9w cilium-klfpd]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                          Ingress   Egress
grafana-67ff49cd99-cdlsh     false     false
prometheus-8c7df94b4-nrscz   false     false
coredns-787d4945fb-6crrf     false     false
coredns-787d4945fb-jp9hq     false     false
Cilium agent 'cilium-gbt9w': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0
Cilium agent 'cilium-klfpd': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 39 Failed 0

</Checks>
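
The counters in the <Checks> block are simple string matches over the agent logs. A rough manual equivalent against this cluster (pod and container names from the output above; the kube-system namespace is an assumption):

# Count warning lines in one agent's log, mirroring the tally above:
kubectl -n kube-system logs cilium-klfpd -c cilium-agent | grep -c 'level=warning'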


• Failure [435.354 seconds]
K8sDatapathConfig
/home/runner/work/cilium/cilium/test/ginkgo-ext/scopes.go:461
  Host firewall
  /home/runner/work/cilium/cilium/test/ginkgo-ext/scopes.go:461
    With VXLAN and endpoint routes [It]
    /home/runner/work/cilium/cilium/test/ginkgo-ext/scopes.go:515

    cilium pre-flight checks failed
    Expected
        <*errors.errorString | 0xc00088e090>: 
        Cilium validation failed: 4m0s timeout expired: Last polled error: host EP is not ready: unable to run command 'cilium endpoint list -o jsonpath='{[?(@.status.identity.id==1)].status.state}'' to retrieve state of host endpoint from cilium-gbt9w: Exitcode: 1 
        Err: exit status 1
        Stdout:
         	 
        Stderr:
         	 Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), wait-for-node-init (init), clean-cilium-state (init), install-cni-binaries (init)
        	 Error: cannot get endpoint list: Cilium API client timeout exceeded
        	 
        	 command terminated with exit code 1
        	 
        
        {
            s: "Cilium validation failed: 4m0s timeout expired: Last polled error: host EP is not ready: unable to run command 'cilium endpoint list -o jsonpath='{[?(@.status.identity.id==1)].status.state}'' to retrieve state of host endpoint from cilium-gbt9w: Exitcode: 1 \nErr: exit status 1\nStdout:\n \t \nStderr:\n \t Defaulted container \"cilium-agent\" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), wait-for-node-init (init), clean-cilium-state (init), install-cni-binaries (init)\n\t Error: cannot get endpoint list: Cilium API client timeout exceeded\n\t \n\t command terminated with exit code 1\n\t \n",
        }
    to be nil

    /home/runner/work/cilium/cilium/test/helpers/manifest.go:305
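
The actual failure is the host endpoint readiness probe: the cilium CLI invocation inside the agent pod could not reach the agent API before the 4m0s budget expired. The probe can be reproduced manually as follows (pod, container, and jsonpath are taken verbatim from the output above; the kube-system namespace and the expected "ready" state are assumptions on my part):

# Query the state of the host endpoint (reserved identity 1) on the
# affected node; a healthy agent should print "ready":
kubectl -n kube-system exec cilium-gbt9w -c cilium-agent -- \
  cilium endpoint list -o jsonpath='{[?(@.status.identity.id==1)].status.state}'

Here the command itself exited with "Cilium API client timeout exceeded", which suggests the agent's local API was unresponsive, rather than the host endpoint merely not being ready yet.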
giorio94 added the area/CI and ci/flake labels on Oct 25, 2023
github-actions bot commented Dec 25, 2023

This issue has been automatically marked as stale because it has not
had recent activity. It will be closed if no further activity occurs.

github-actions bot added the stale label on Dec 25, 2023

github-actions bot commented Jan 8, 2024

This issue has not seen any activity since it was marked stale.
Closing.

github-actions bot closed this as not planned on Jan 8, 2024