OCPBUGS-1739: pods: deleteLogicalPort should not fail when ls is gone #1290
Conversation
deleteLogicalPort should not fail when its logical switch is already gone. This is needed when handling situations where a node has been removed from the cluster, but a completed pod remained present after the ovnkube master restarts.

ovn-org/ovn-kubernetes#3168

Conflicts:
	go-controller/pkg/ovn/pods_test.go

Closes #3168: ovnkube fails to restart after node deletion
Reported-at: https://issues.redhat.com/browse/OCPBUGS-1568
Signed-off-by: Flavio Fernandes <flaviof@redhat.com>
(cherry picked from commit b328345)
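For readers skimming the change, here is a minimal sketch of the behavior fix, assuming hypothetical helpers (`getLogicalSwitch`, an in-memory map standing in for the NB database); the real fix lives in go-controller/pkg/ovn and goes through libovsdb, but the gist is that a missing logical switch is treated as already-cleaned-up rather than as an error:

```go
package main

import (
	"errors"
	"fmt"
)

// ErrNotFound is returned by the (hypothetical) store when an object
// does not exist; libovsdb exposes a similar sentinel error.
var ErrNotFound = errors.New("object not found")

// logicalSwitches stands in for the NB database: switch name -> port names.
var logicalSwitches = map[string][]string{}

func getLogicalSwitch(name string) ([]string, error) {
	ports, ok := logicalSwitches[name]
	if !ok {
		return nil, ErrNotFound
	}
	return ports, nil
}

// deleteLogicalPort removes a pod's logical switch port. If the node's
// logical switch is already gone (e.g. the node was deleted while a
// completed pod lingered), that is treated as success, not an error.
func deleteLogicalPort(switchName, portName string) error {
	ports, err := getLogicalSwitch(switchName)
	if errors.Is(err, ErrNotFound) {
		// Switch already deleted: nothing left to clean up.
		return nil
	}
	if err != nil {
		return fmt.Errorf("looking up switch %s: %w", switchName, err)
	}
	// ... remove portName from ports and update the database ...
	_ = ports
	return nil
}

func main() {
	// Node "node1" was removed, so its switch no longer exists.
	if err := deleteLogicalPort("node1", "ns_pod"); err != nil {
		fmt.Println("unexpected failure:", err)
		return
	}
	fmt.Println("deleteLogicalPort tolerated the missing switch")
}
```

The key design choice is making deletion idempotent: cleaning up a pod whose node (and switch) are gone becomes a no-op instead of wedging the ovnkube master on restart.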
@flavio-fernandes: No Bugzilla bug is referenced in the title of this pull request.
@flavio-fernandes: This pull request references Jira Issue OCPBUGS-1739, which is invalid.

The bug has been updated to refer to the pull request using the external bug tracker.
/retest-required
1 similar comment
/retest-required
/retest
hmm, do we need to look into:
I am LGTM on the fix itself; slightly concerned about the "timed out waiting for OVS port binding" errors seen in the CI, which shouldn't be happening here....
Oct 05 20:27:33.000 W ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-157-229.us-west-1.compute.internal reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_alertmanager-main-1_openshift-monitoring_b7c2d67b-0011-43a6-a5f3-0c46bce6fa29_0(fddcb0010ac32bba652b1d161c37f7536aa24c6a61fd94a40e3fc5c1dd5a3dca): error adding pod openshift-monitoring_alertmanager-main-1 to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): [openshift-monitoring/alertmanager-main-1/b7c2d67b-0011-43a6-a5f3-0c46bce6fa29:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[openshift-monitoring/alertmanager-main-1 fddcb0010ac32bba652b1d161c37f7536aa24c6a61fd94a40e3fc5c1dd5a3dca] failed to configure pod interface: timed out waiting for OVS port binding (ovn-installed) for 0a:58:0a:83:00:20 [10.131.0.32/23]'

Seems like something went wrong with port binding, and it happens much later than when the master finished: the controller started much later! LOL, we shouldn't be reporting ready if the controller wasn't ready. Anyway, this PR seems good and doesn't have anything to do with the CI error.
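For context on that timeout: the CNI side waits for ovn-controller to flag the pod's OVS interface with external_ids:ovn-installed=true before reporting the sandbox interface ready. Below is a rough sketch of that kind of wait loop, assuming ovs-vsctl is on PATH and using a hypothetical interface name; this is illustrative only, not the actual ovn-kubernetes CNI code:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForOVNInstalled polls the OVS interface until ovn-controller has
// flagged it with external_ids:ovn-installed=true, or the timeout hits.
// If ovn-controller never binds the port, this loop gives up with the
// same kind of "timed out waiting for OVS port binding" error seen in CI.
func waitForOVNInstalled(iface string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("ovs-vsctl", "--if-exists", "get",
			"Interface", iface, "external_ids:ovn-installed").Output()
		if err == nil && strings.Contains(string(out), "true") {
			return nil // port binding completed
		}
		time.Sleep(200 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for OVS port binding (ovn-installed) for %s", iface)
}

func main() {
	// "veth-example" is a made-up interface name for illustration.
	if err := waitForOVNInstalled("veth-example", 5*time.Second); err != nil {
		fmt.Println(err)
	}
}
```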
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: flavio-fernandes, trozet, tssurya. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
/label backport-risk-assessed
/jira refresh
@flavio-fernandes: This pull request references Jira Issue OCPBUGS-1739, which is invalid.
/jira refresh
@flavio-fernandes: This pull request references Jira Issue OCPBUGS-1739, which is valid. 6 validation(s) were run on this bug.

Requesting review from QA contact.
/label cherry-pick-approved
@flavio-fernandes: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests.
/test e2e-aws-ovn-shared-to-local-gateway-mode-migration
/test e2e-aws-ovn-upgrade-local-gateway
@flavio-fernandes: All pull requests linked via external trackers have merged: Jira Issue OCPBUGS-1739 has been moved to the MODIFIED state.