One conformance test fails when using cilium as cni #935

Open
rancher-max opened this issue Apr 29, 2021 · 2 comments
rancher-max commented Apr 29, 2021

Environmental Info:
RKE2 Version:

v1.20.6

Node(s) CPU architecture, OS, and Version:

Ubuntu

Cluster Configuration:

3 servers, 1 agent

Describe the bug:

Conformance test failing: [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]

Steps To Reproduce:

  • Install RKE2 with Cilium as the CNI
  • Run the e2e conformance tests:
$ sonobuoy run --e2e-focus="validates that there is no conflict between pods with same hostPort but different hostIP and protocol" --kube-conformance-image-version=v1.20.6 --kubeconfig=/path/to/kubeconfig

# Poll until the status is no longer 'Running' or 'Pending'
$ sonobuoy status --kubeconfig=/path/to/kubeconfig

# Prints the path of the results tarball, referred to below as <result>
$ sonobuoy retrieve --kubeconfig=/path/to/kubeconfig

# After extracting, the logs are in ./plugins/e2e/results/global/e2e.log
$ tar xzf <result>

$ sonobuoy results <result>

These steps use sonobuoy version 0.20.0.
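For reference, the pod layout this test exercises can also be reproduced by hand, without sonobuoy. The manifest below is a sketch based on the test steps in the log further down: three pods share hostPort 54321 on a single node and differ only in hostIP and protocol. The node name is a placeholder, and the agnhost image and args are assumptions (the e2e suite pins its own image per release).

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  nodeName: <node-name>        # placeholder: the node the test pins all three pods to
  containers:
  - name: netexec
    image: k8s.gcr.io/e2e-test-images/agnhost:2.21   # assumption: any recent agnhost tag works
    args: ["netexec", "--http-port=54321", "--udp-port=54321"]
    ports:
    - containerPort: 54321
      hostPort: 54321
      hostIP: 127.0.0.1        # pod2: the node's own IP (e.g. 172.31.14.101); pod3: same IP but protocol: UDP
      protocol: TCP
EOF

pod2 and pod3 reuse this manifest with only the hostIP/protocol changed as noted. All three should schedule onto the node without a port conflict, and curl against each hostIP:port combination should reach the matching pod.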

Expected behavior:

All conformance tests should pass.

Actual behavior:

This specific test fails on every run.

Additional context / logs:

Potentially relevant conformance logs:

[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-13a14d5b-c507-48d5-8617-0dda86b100a1 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 172.31.14.101 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 172.31.14.101 but use UDP protocol on the node which pod2 resides
STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321
Apr 30 00:08:15.426: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.31.14.101 http://127.0.0.1:54321/hostname] Namespace:sched-pred-9126 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 30 00:08:15.426: INFO: >>> kubeConfig: /tmp/kubeconfig-113081924
Apr 30 00:08:20.705: INFO: Can not connect from e2e-host-exec to pod(pod1) to serverIP: 127.0.0.1, port: 54321
STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321
Apr 30 00:08:20.705: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.31.14.101 http://127.0.0.1:54321/hostname] Namespace:sched-pred-9126 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 30 00:08:20.705: INFO: >>> kubeConfig: /tmp/kubeconfig-113081924
Apr 30 00:08:25.845: INFO: Can not connect from e2e-host-exec to pod(pod1) to serverIP: 127.0.0.1, port: 54321
STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321
Apr 30 00:08:25.845: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.31.14.101 http://127.0.0.1:54321/hostname] Namespace:sched-pred-9126 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 30 00:08:25.845: INFO: >>> kubeConfig: /tmp/kubeconfig-113081924
Apr 30 00:08:30.928: INFO: Can not connect from e2e-host-exec to pod(pod1) to serverIP: 127.0.0.1, port: 54321
STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321
Apr 30 00:08:30.928: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.31.14.101 http://127.0.0.1:54321/hostname] Namespace:sched-pred-9126 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 30 00:08:30.928: INFO: >>> kubeConfig: /tmp/kubeconfig-113081924
Apr 30 00:08:36.007: INFO: Can not connect from e2e-host-exec to pod(pod1) to serverIP: 127.0.0.1, port: 54321
STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321
Apr 30 00:08:36.007: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.31.14.101 http://127.0.0.1:54321/hostname] Namespace:sched-pred-9126 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 30 00:08:36.007: INFO: >>> kubeConfig: /tmp/kubeconfig-113081924
Apr 30 00:08:41.089: INFO: Can not connect from e2e-host-exec to pod(pod1) to serverIP: 127.0.0.1, port: 54321
Apr 30 00:08:41.090: FAIL: Failed to connect to exposed host ports
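The failing check is the curl shown in the ExecWithOptions lines above, and it can be replayed by hand from the same host-exec pod (the namespace and pod name below are the per-run values from this log and will differ on a fresh run):

$ kubectl exec -n sched-pred-9126 e2e-host-exec -- \
    /bin/sh -c 'curl -g --connect-timeout 5 --interface 172.31.14.101 http://127.0.0.1:54321/hostname'

With a working CNI this returns pod1's hostname; here it times out after 5 seconds, producing the repeated "Can not connect" lines above.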
@rancher-max rancher-max added the kind/bug Something isn't working label Apr 29, 2021
@rancher-max rancher-max added this to the v1.21.1+rke2r1 milestone Apr 30, 2021
rancher-max (author) commented:
This is a known upstream issue: cilium/cilium#14287

@brandond brandond added the kind/upstream-issue This issue appears to be caused by an upstream bug label May 1, 2021
rancher-max (author) commented:
Still seeing this on v1.24.1-rc1+rke2r1; it appears to be due to the same upstream bug.
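Until the upstream fix lands, the rest of the conformance suite can still be run with this one test skipped; a sketch using the same sonobuoy flags as above (--e2e-skip is the inverse of --e2e-focus):

$ sonobuoy run --mode=certified-conformance \
    --e2e-skip="validates that there is no conflict between pods with same hostPort but different hostIP and protocol" \
    --kubeconfig=/path/to/kubeconfig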
