
Remove NoSchedule taints for control-planes as well as masters #1173

Merged

Conversation

iholder101
Contributor

@iholder101 iholder101 commented Apr 17, 2024

What this PR does / why we need it:
Removes the NoSchedule taints for control-planes as well as masters.

Special notes for your reviewer:
PR kubevirt/kubevirt#11659 requires Kubevirt's infra components to run on control-plane nodes by default. However, the pull-kubevirt-e2e-kind-sriov lane is failing because its single control-plane node has a NoSchedule taint:

"taints": [
    {
        "key": "node-role.kubernetes.io/control-plane",
        "effect": "NoSchedule"
    }
]

After this PR is merged, this taint should no longer exist.
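
For reference, the effect this PR aims for can be reproduced manually on a running cluster with `kubectl taint` (a trailing `-` removes a taint). This is only an illustrative sketch; the actual change lives in kubevirtci's kind provisioning, not in a manual command:

```bash
# Illustrative only: untaint all nodes for both the legacy "master" key and the
# newer "control-plane" key. "|| true" ignores nodes that do not carry the taint.
kubectl taint nodes --all node-role.kubernetes.io/master:NoSchedule- || true
kubectl taint nodes --all node-role.kubernetes.io/control-plane:NoSchedule- || true

# Verify that no taints remain on the control-plane node:
kubectl get nodes -o custom-columns='NODE:.metadata.name,TAINTS:.spec.taints[*].key'
```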

A PR to test these changes: kubevirt/kubevirt#11730.

Release note:

Remove NoSchedule taints for control-planes as well as masters

Signed-off-by: Itamar Holder <iholder@redhat.com>
@iholder101
Contributor Author

/cc @brianmcarey @dhiller

@oshoval
Contributor

oshoval commented Apr 17, 2024

/cc @EdDev @ormergi

Can you use the kind non-SR-IOV lane instead? (There is one.) However, according to the PR description, SR-IOV needs it as well.
Did you check it with the SR-IOV lane on kubevirt, with all the e2e tests?
Please note this lane is maintained by sig-network, so sig-network should be in the loop.

There was an attempt in the past to do something similar that broke SR-IOV:
2e998c2
but it is not the same.

@iholder101
Contributor Author

> /cc @EdDev @ormergi
>
> Can you use the kind non-SR-IOV lane instead? (There is one.) However, according to the PR description, SR-IOV needs it as well.

Not sure I understand the question. pull-kubevirt-e2e-kind-sriov is a gating presubmit lane that blocks kubevirt/kubevirt#11659.

Is there a reason to keep these taints for this lane?

> Did you check it with the SR-IOV lane on kubevirt, with all the e2e tests?

Do you mean locally?
Is there a way to test it via CI?

> Please note this lane is maintained by sig-network, so sig-network should be in the loop.

Thanks for pinging them!

@oshoval
Contributor

oshoval commented Apr 17, 2024

The change looks fine.
kubevirtci runs all the SR-IOV tests except two (the ones that need CPU management).
I don't think this change will affect them; we can take the chance and revert if needed,
unless you want to spin up a cluster locally on a machine that has SR-IOV and test it in the kubevirt repo.

You can also create a PR that bumps it into kubevirt even before the official bump and test it there.
Just copy the file into kubevirt and that's it. Running this in kubevirt and creating a dummy PR will do:
curl -L https://github.com/kubevirt/kubevirtci/pull/1173.patch | git am
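
For clarity, here is a hedged sketch of the full workflow that suggestion describes; the branch name and fork remote below are placeholders, and only the `curl | git am` line comes from this thread:

```bash
# Sketch only: apply the kubevirtci PR as a patch inside a kubevirt checkout,
# then push it to a fork and open a dummy PR to exercise the CI lanes.
cd kubevirt                                    # local checkout of kubevirt/kubevirt
git checkout -b test-kubevirtci-pr-1173        # placeholder branch name
curl -L https://github.com/kubevirt/kubevirtci/pull/1173.patch | git am
git push <your-fork> test-kubevirtci-pr-1173   # placeholder remote; then open the dummy PR
```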

It reminded me of a change in the past that broke SR-IOV:
2e998c2
but this is not the same, so it should be fine.

Thanks

@oshoval
Contributor

oshoval commented Apr 17, 2024

The vgpu lane failure is not related, right?
(It also uses the same kind file.)

@zhlhahaha
FYI about this PR, please, as it affects the ARM lane.
Thanks

@iholder101
Contributor Author

> You can also create a PR that bumps it into kubevirt even before the official bump and test it there. Just copy the file into kubevirt and that's it. Running this in kubevirt and creating a dummy PR will do: curl -L https://github.com/kubevirt/kubevirtci/pull/1173.patch | git am

Wow man, that's really cool! Thanks!
Here's the test PR: kubevirt/kubevirt#11730. I'll also add it to the PR description.

@iholder101
Contributor Author

/test check-up-kind-1.27-vgpu

@zhlhahaha
Contributor

> The vgpu lane failure is not related, right? (It also uses the same kind file.)
>
> @zhlhahaha FYI about this PR, please, as it affects the ARM lane. Thanks

Hi @oshoval, thanks for the reminder!
/lgtm

@kubevirt-bot kubevirt-bot added the lgtm Indicates that a PR is ready to be merged. label Apr 18, 2024
@iholder101
Contributor Author

kind tests have passed here: kubevirt/kubevirt#11730.
Anything else missing? @dhiller @brianmcarey

@brianmcarey brianmcarey (Member) left a comment


Looks good to me - thanks @iholder101

/approve

@kubevirt-bot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: brianmcarey

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@kubevirt-bot kubevirt-bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Apr 18, 2024
@oshoval
Contributor

oshoval commented Apr 18, 2024

It is already lgtm'ed.
Please just make sure the vgpu flake happens even without this PR, and then it is fine (I believe the vgpu flake isn't related, but best to double-check if possible, please).

@brianmcarey
Member

brianmcarey commented Apr 18, 2024

> Please just make sure the vgpu flake happens even without this PR, and then it is fine (I believe the vgpu flake isn't related, but best to double-check if possible, please).

Yeah the vgpu lane has issues here that are unrelated to this PR. Will try to take a look at it in the next few days.

@kubevirt-bot
Contributor

kubevirt-bot commented Apr 18, 2024

@iholder101: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| check-up-kind-1.27-vgpu | 8b6b623 | link | false | /test check-up-kind-1.27-vgpu |

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@ormergi ormergi (Contributor) left a comment


LGTM

@kubevirt-commenter-bot

/retest-required
This bot automatically retries required jobs that failed/flaked on approved PRs.
Silence the bot with an /lgtm cancel or /hold comment for consistent failures.

1 similar comment
@kubevirt-commenter-bot

/retest-required
This bot automatically retries required jobs that failed/flaked on approved PRs.
Silence the bot with an /lgtm cancel or /hold comment for consistent failures.

@oshoval
Contributor

oshoval commented Apr 18, 2024

It seems the kubevirtci publish job has been flaking lately:
https://prow.ci.kubevirt.io/view/gs/kubevirt-prow/logs/publish-kubevirtci/1780863280890253312

This is why I think the provision manager decides that the VM-based providers should run:
it compares against the latest release (which is what it should do, but if there had been more recent releases the diff would have been smaller).

@kubevirt-bot kubevirt-bot merged commit 1c58677 into kubevirt:main Apr 18, 2024
9 of 10 checks passed
kubevirt-bot added a commit to kubevirt-bot/kubevirt that referenced this pull request Apr 23, 2024
[8f203b9 Automatic bump of CentOS Stream to latest](kubevirt/kubevirtci#1181)
[5d854e0 centos9: Add --numa support to vm.sh](kubevirt/kubevirtci#1174)
[73b90a3 Run bazelisk run //robots/cmd/kubevirtci-bumper:kubevirtci-bumper -- -ensure-last-three-minor-of v1 --k8s-provider-dir /home/prow/go/src/github.com/kubevirt/project-infra/../kubevirtci/cluster-provision/k8s](kubevirt/kubevirtci#1175)
[1c58677 Remove NoSchedule taints for control-planes as well as masters](kubevirt/kubevirtci#1173)
[9d50baf Automatic bump of CentOS Stream to latest](kubevirt/kubevirtci#1168)
[ef69592 Bump alpine to 3.19](kubevirt/kubevirtci#1167)
[4c6d7c7 Add AAQ Operator as an optional cluster addon](kubevirt/kubevirtci#1170)
[d69cd4e Automatic bump of CentOS Stream to latest](kubevirt/kubevirtci#1166)
[62457f2 Automatic bump of CentOS Stream to latest](kubevirt/kubevirtci#1162)
[85d8a72 centos: Update centos stream base to 20240318.0](kubevirt/kubevirtci#1161)
[970d172 Bump github.com/docker/docker in /cluster-provision/gocli](kubevirt/kubevirtci#1159)

```release-note
NONE
```

Signed-off-by: kubevirt-bot <kubevirtbot@redhat.com>