Bug 1954302: Remove OVS daemonsets #1076
Conversation
@jluhrsen: This pull request references Bugzilla bug 1954302, which is valid. The bug has been updated to refer to the pull request using the external bug tracker. 6 validation(s) were run on this bug.
Requesting review from QA contact.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@jluhrsen: PR needs rebase.
In 4.6 we moved OVS to systemd, but we still needed a pod capable of running OVS during the upgrade; after that, the pod became unnecessary. In 4.7, OVS is guaranteed to launch via systemd, both on fresh installs and when upgrading from 4.6, so the OVS pod serves no purpose other than letting us gather its logs from the apiserver. We cannot even restart the OVS daemons by deleting the pod, which is confusing.
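For reviewers verifying the change on a node, a quick check might look like the sketch below. This is not taken from the PR itself; `ovs-vswitchd` is the stock Open vSwitch daemon name, and the script only reports status, falling back gracefully where systemd is unavailable.

```shell
# Hedged sketch: after this change, OVS should run as a systemd unit on
# the host rather than as a daemonset pod. Unit name is an assumption.
if command -v systemctl >/dev/null 2>&1; then
    # 'is-active' prints active/inactive/unknown; suppress errors when
    # systemd is not actually managing this environment.
    status=$(systemctl is-active ovs-vswitchd 2>/dev/null || true)
    [ -n "$status" ] || status="unavailable"
else
    status="no-systemctl"
fi
echo "ovs-vswitchd systemd status: $status"
```

On a 4.7 node this should report `active`; one would also expect `oc get pods` in the SDN namespace to show no OVS pods remaining.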
Force-pushed from ced9581 to 614e6e1.
/retest
@abhat, one check is still failing with random test cases. Can you lgtm?
/retest
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: abhat, jluhrsen. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/retest Please review the full test history for this PR and help us cut down flakes.
/override ci/prow/e2e-metal-ipi-ovn-ipv6 We keep flaking on infrastructure problems (image pull failures, etc.) and this PR has previously passed that job.
@jluhrsen: Some pull requests linked via external trackers have merged: The following pull requests linked via external trackers have not merged:
These pull requests must merge or be unlinked from the Bugzilla bug in order for it to move to the next state. Once unlinked, request a bug refresh with Bugzilla bug 1954302 has not been moved to the MODIFIED state.
@knobunc: Overrode contexts on behalf of knobunc: ci/prow/e2e-metal-ipi-ovn-ipv6