
OCPBUGS-1739: pods: deleteLogicalPort should not fail when ls is gone #1290

Conversation

flavio-fernandes (Contributor)

deleteLogicalPort should not fail when its logical switch is already gone. This is needed when handling situations where a node has been removed from the cluster, but a completed pod remained present after the ovnkube master restarts.

ovn-org/ovn-kubernetes#3168

Conflicts:
  go-controller/pkg/ovn/pods_test.go

Closes #3168: ovnkube fails to restart after node deletion
Reported-at: https://issues.redhat.com/browse/OCPBUGS-1568
Signed-off-by: Flavio Fernandes <flaviof@redhat.com>
(cherry picked from commit b328345)

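The fix described above can be sketched as follows. This is a minimal illustration of the idea, not the actual ovn-kubernetes code: `fakeNB`, `errNotFound`, and the function names are hypothetical stand-ins for the real libovsdb-backed northbound client.

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the "object not found" error a northbound
// lookup would return when the logical switch row no longer exists.
var errNotFound = errors.New("object not found")

// fakeNB is a hypothetical stand-in for the OVN northbound client.
type fakeNB struct {
	switches map[string]bool
}

func (nb *fakeNB) delLogicalSwitchPort(switchName, portName string) error {
	if !nb.switches[switchName] {
		return errNotFound
	}
	// ... remove the port from the switch here ...
	return nil
}

// deleteLogicalPort deletes a pod's logical switch port. The point of the
// fix: if the node's logical switch is already gone (node deleted while the
// ovnkube master was down), treat that as success instead of failing, so
// stale completed pods can still be cleaned up on restart.
func deleteLogicalPort(nb *fakeNB, node, port string) error {
	if err := nb.delLogicalSwitchPort(node, port); err != nil {
		if errors.Is(err, errNotFound) {
			// Switch (and therefore the port) is already gone: nothing to do.
			return nil
		}
		return fmt.Errorf("deleting port %s on switch %s: %w", port, node, err)
	}
	return nil
}

func main() {
	nb := &fakeNB{switches: map[string]bool{"node-a": true}}
	fmt.Println(deleteLogicalPort(nb, "node-a", "pod-1"))    // switch exists
	fmt.Println(deleteLogicalPort(nb, "node-gone", "pod-2")) // switch gone: no error
}
```

Before the fix, the not-found error would have propagated and aborted cleanup; the change narrows the failure handling to only genuine errors.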
@openshift-ci bot commented Oct 5, 2022

@flavio-fernandes: No Bugzilla bug is referenced in the title of this pull request.
To reference a bug, add 'Bug XXX:' to the title of this pull request and request another bug refresh with /bugzilla refresh.

In response to this:

OCPBUGS-1739: pods: deleteLogicalPort should not fail when ls is gone

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci-robot openshift-ci-robot added the jira/severity-important Referenced Jira bug's severity is important for the branch this PR is targeting. label Oct 5, 2022
@openshift-ci-robot (Contributor)

@flavio-fernandes: This pull request references Jira Issue OCPBUGS-1739, which is invalid:

  • expected the bug to target the "4.11.z" version, but no target version was set
  • expected Jira Issue OCPBUGS-1739 to depend on a bug targeting a version in 4.12.0 and in one of the following states: VERIFIED, RELEASE PENDING, CLOSED (ERRATA), CLOSED (CURRENT RELEASE), CLOSED (DONE), but no dependents were found

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

The bug has been updated to refer to the pull request using the external bug tracker.

In response to this:

deleteLogicalPort should not fail when its logical switch is already gone. This is needed when handling situations where a node has been removed from the cluster, but a completed pod remained present after the ovnkube master restarts.


@openshift-ci-robot openshift-ci-robot added the jira/invalid-bug Indicates that a referenced Jira bug is invalid for the branch this PR is targeting. label Oct 5, 2022
@flavio-fernandes (Contributor, Author)

/retest-required

1 similar comment
@flavio-fernandes (Contributor, Author)

/retest-required

@tssurya commented Oct 10, 2022

/retest

@tssurya commented Oct 10, 2022

hmm do we need to look into:

ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-157-229.us-west-1.compute.internal - 755.35 seconds after deletion - reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_alertmanager-main-1_openshift-monitoring_b7c2d67b-0011-43a6-a5f3-0c46bce6fa29_0(fddcb0010ac32bba652b1d161c37f7536aa24c6a61fd94a40e3fc5c1dd5a3dca): error adding pod openshift-monitoring_alertmanager-main-1 to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): [openshift-monitoring/alertmanager-main-1/b7c2d67b-0011-43a6-a5f3-0c46bce6fa29:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[openshift-monitoring/alertmanager-main-1 fddcb0010ac32bba652b1d161c37f7536aa24c6a61fd94a40e3fc5c1dd5a3dca] [openshift-monitoring/alertmanager-main-1 fddcb0010ac32bba652b1d161c37f7536aa24c6a61fd94a40e3fc5c1dd5a3dca] failed to configure pod interface: timed out waiting for OVS port binding (ovn-installed) for 0a:58:0a:83:00:20 [10.131.0.32/23]'

https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_ovn-kubernetes/1290/pull-ci-openshift-ovn-kubernetes-release-4.11-4.11-upgrade-from-stable-4.10-local-gateway-e2e-aws-ovn-upgrade/1577723083010084864 ?

@tssurya commented Oct 10, 2022

LGTM on the fix itself; I'm slightly concerned about the "timed out waiting for OVS port binding" error seen in CI, which shouldn't be happening here...
If the retest passes I can attach the label.

@tssurya commented Oct 10, 2022

Oct 05 20:27:33.000 W ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-157-229.us-west-1.compute.internal reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_alertmanager-main-1_openshift-monitoring_b7c2d67b-0011-43a6-a5f3-0c46bce6fa29_0(fddcb0010ac32bba652b1d161c37f7536aa24c6a61fd94a40e3fc5c1dd5a3dca): error adding pod openshift-monitoring_alertmanager-main-1 to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): [openshift-monitoring/alertmanager-main-1/b7c2d67b-0011-43a6-a5f3-0c46bce6fa29:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[openshift-monitoring/alertmanager-main-1 fddcb0010ac32bba652b1d161c37f7536aa24c6a61fd94a40e3fc5c1dd5a3dca] [openshift-monitoring/alertmanager-main-1 fddcb0010ac32bba652b1d161c37f7536aa24c6a61fd94a40e3fc5c1dd5a3dca] failed to configure pod interface: timed out waiting for OVS port binding (ovn-installed) for 0a:58:0a:83:00:20 [10.131.0.32/23]\n'

Seems like something went wrong with port binding; it happens much later:
2022-10-05T20:36:03.591Z|00123|binding|INFO|Claiming lport openshift-monitoring_alertmanager-main-1 for this chassis.
2022-10-05T20:36:03.591Z|00124|binding|INFO|openshift-monitoring_alertmanager-main-1: Claiming 0a:58:0a:83:00:19 10.131.0.25
2022-10-05T20:36:03.631Z|00125|binding|INFO|Setting lport openshift-monitoring_alertmanager-main-1 ovn-installed in OVS
2022-10-05T20:36:03.631Z|00126|binding|INFO|Setting lport openshift-monitoring_alertmanager-main-1 up in Southbound

master finished at:
I1005 21:28:53.815684 1 pods.go:409] [openshift-monitoring/alertmanager-main-1] addLogicalPort took 1.107452ms, libovsdb time 601.406µs, annotation time: 0s

ovn-controller started much later:
2022-10-05T20:35:19+00:00 - starting ovn-controller
2022-10-05T20:35:19Z|00001|vlog|INFO|opened log file /var/log/ovn/acl-audit-log.log
2022-10-05T20:35:19.478Z|00002|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting...
2022-10-05T20:35:19.478Z|00003|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected
2022-10-05T20:35:19.481Z|00004|main|INFO|OVN internal version is : [22.06.1-20.23.0-63.4]
2022-10-05T20:35:19.481Z|00005|main|INFO|OVS IDL reconnected, force recompute.
2022-10-05T20:35:19.487Z|00006|reconnect|INFO|ssl:10.0.252.194:9642: connecting...

LOL, we shouldn't be reporting ready if the controller wasn't ready. Anyway, this PR looks good and doesn't have anything to do with the CI error.

@tssurya commented Oct 10, 2022

/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Oct 10, 2022
@trozet commented Oct 11, 2022

/approve

@openshift-ci bot commented Oct 11, 2022

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: flavio-fernandes, trozet, tssurya

The full list of commands accepted by this bot can be found here.

The pull request process is described here


Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Oct 11, 2022
@trozet commented Oct 11, 2022

/label backport-risk-assessed

@openshift-ci openshift-ci bot added the backport-risk-assessed Indicates a PR to a release branch has been evaluated and considered safe to accept. label Oct 11, 2022
@flavio-fernandes (Contributor, Author)

/jira refresh

@openshift-ci-robot (Contributor)

@flavio-fernandes: This pull request references Jira Issue OCPBUGS-1739, which is invalid:

  • expected the bug to target the "4.11.z" version, but no target version was set
  • expected Jira Issue OCPBUGS-1739 to depend on a bug targeting a version in 4.12.0 and in one of the following states: VERIFIED, RELEASE PENDING, CLOSED (ERRATA), CLOSED (CURRENT RELEASE), CLOSED (DONE), but no dependents were found

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

In response to this:

/jira refresh


@flavio-fernandes (Contributor, Author)

/jira refresh

@openshift-ci-robot openshift-ci-robot added jira/valid-bug Indicates that a referenced Jira bug is valid for the branch this PR is targeting. bugzilla/valid-bug Indicates that a referenced Bugzilla bug is valid for the branch this PR is targeting. and removed jira/invalid-bug Indicates that a referenced Jira bug is invalid for the branch this PR is targeting. labels Oct 20, 2022
@openshift-ci-robot (Contributor)

@flavio-fernandes: This pull request references Jira Issue OCPBUGS-1739, which is valid.

6 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.11.z) matches configured target version for branch (4.11.z)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)
  • dependent bug Jira Issue OCPBUGS-1862 is in the state Verified, which is one of the valid states (VERIFIED, RELEASE PENDING, CLOSED (ERRATA), CLOSED (CURRENT RELEASE), CLOSED (DONE))
  • dependent Jira Issue OCPBUGS-1862 targets the "4.12.0" version, which is one of the valid target versions: 4.12.0
  • bug has dependents

Requesting review from QA contact:
/cc @anuragthehatter

In response to this:

/jira refresh


@anuragthehatter

/label cherry-pick-approved

@openshift-ci openshift-ci bot added the cherry-pick-approved Indicates a cherry-pick PR into a release branch has been approved by the release branch manager. label Oct 20, 2022
@openshift-ci bot commented Oct 20, 2022

@flavio-fernandes: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name: ci/prow/e2e-hypershift
Commit: 944ab93
Required: false
Rerun command: /test e2e-hypershift

Full PR test history. Your PR dashboard.


@flavio-fernandes (Contributor, Author)

/test e2e-aws-ovn-shared-to-local-gateway-mode-migration

@flavio-fernandes (Contributor, Author)

/test e2e-aws-ovn-upgrade-local-gateway

@openshift-merge-robot openshift-merge-robot merged commit af13058 into openshift:release-4.11 Oct 20, 2022
@openshift-ci-robot (Contributor)

@flavio-fernandes: All pull requests linked via external trackers have merged:

Jira Issue OCPBUGS-1739 has been moved to the MODIFIED state.

In response to this:

deleteLogicalPort should not fail when its logical switch is already gone. This is needed when handling situations where a node has been removed from the cluster, but a completed pod remained present after the ovnkube master restarts.


@flavio-fernandes flavio-fernandes deleted the deleteLogicalPort_noLs_4.11 branch October 20, 2022 22:31