
Bug 1820472: Not delete namespace object when cleanup not rended objects #641

Merged

merged 1 commit into openshift:master from the not-delete-namespace branch on May 26, 2020

Conversation

pliurh (Contributor) commented May 15, 2020

Namespace deletion cannot complete while openshift-apiserver is down: the namespace gets stuck in the 'Terminating' state indefinitely, which can block the SDN restore. So we choose not to delete the namespace in the cleanup function, but leave it to the user to manually remove the namespace after the cluster is back to a normal state.
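
A minimal sketch of this behavior (hypothetical names: objectsToClean, deleteObject, and cleanupNotRendered stand in for the operator's actual helpers, not its real API):

```go
package cleanup

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// deleteObject is a hypothetical stand-in for the operator's real delete call.
func deleteObject(obj *unstructured.Unstructured) error {
	fmt.Printf("deleting %s %s\n", obj.GetKind(), obj.GetName())
	return nil
}

// cleanupNotRendered sketches the cleanup described above: delete every
// no-longer-rendered object except core-group Namespaces, which are left
// for the user to remove once the cluster is healthy again.
func cleanupNotRendered(objs []*unstructured.Unstructured) error {
	for _, obj := range objs {
		gvk := obj.GroupVersionKind()
		// A Namespace deleted while openshift-apiserver is down hangs
		// in Terminating and can block SDN restoration, so skip it.
		if gvk.Group == "" && gvk.Kind == "Namespace" {
			continue
		}
		if err := deleteObject(obj); err != nil {
			return err
		}
	}
	return nil
}
```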

pliurh changed the title from "Not delete namespace object when cleanup not rended objects" to "Bug 1820472: Not delete namespace object when cleanup not rended objects" on May 15, 2020
openshift-ci-robot added the labels bugzilla/severity-high (Referenced Bugzilla bug's severity is high for the branch this PR is targeting) and bugzilla/valid-bug (Indicates that a referenced Bugzilla bug is valid for the branch this PR is targeting) on May 15, 2020
openshift-ci-robot (Contributor) commented:

@pliurh: This pull request references Bugzilla bug 1820472, which is valid. The bug has been updated to refer to the pull request using the external bug tracker.

3 validations were run on this bug:
  • bug is open, matching expected state (open)
  • bug target release (4.5.0) matches configured target release for branch (4.5.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, ON_DEV, POST)

In response to this:

Bug 1820472: Not delete namespace object when cleanup not rended objects


pliurh (Contributor Author) commented May 15, 2020

/test verify

pliurh (Contributor Author) commented May 15, 2020

@squeed @alexanderConstantinescu PTAL.

pliurh (Contributor Author) commented May 15, 2020

/retest

alexanderConstantinescu (Contributor) commented:

> @squeed @alexanderConstantinescu PTAL.

I am not sure if one can choose to go from "multus deployment -> non-multus deployment", which would leave the multus namespace lingering around with this patch.

@squeed, any input on that?

pliurh (Contributor Author) commented May 18, 2020

/retest

squeed (Contributor) commented May 18, 2020

> @squeed @alexanderConstantinescu PTAL.
>
> I am not sure if one can choose to go from "multus deployment -> non-multus deployment", which would leave the multus namespace lingering around with this patch.
>
> @squeed, any input on that?

That's correct - disabling multus after-the-fact is not supported.

I think this is a better approach; could you add a quick comment in the code explaining why this was added? Then I'll lgtm it.

pliurh force-pushed the not-delete-namespace branch 2 times, most recently from 40cc71c to 0c5ce3b on May 18, 2020 at 13:42
pliurh (Contributor Author) commented May 18, 2020

@squeed PTAL

```diff
@@ -85,6 +85,12 @@ func (status *StatusManager) deleteRelatedObjectsNotRendered(co *configv1.ClusterOperator) {
 			status.relatedObjects = append(status.relatedObjects, currentObj)
 			continue
 		}
+		if gvk.Kind == "Namespace" {
```
A reviewer (Contributor) commented on this line:
Check that gvk.Group == "core" too.

pliurh (Contributor Author) replied:
In the namespace gvk object, the group is an empty string, so I changed it to use gvk.Group == "" here.
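
For reference, the core API group really is the empty string in a GroupVersionKind, not "core"; a minimal, runnable illustration (not the PR's actual code):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
	// Namespace belongs to the core API group, whose name is the empty
	// string "" rather than "core", so the check is gvk.Group == "".
	ns := schema.GroupVersionKind{Group: "", Version: "v1", Kind: "Namespace"}
	fmt.Println(ns.Group == "" && ns.Kind == "Namespace") // true
}
```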

Commit message:

Namespace deletion cannot be done while openshift-apiserver is down. It will be stuck in the 'Terminating' state forever, and that can block the SDN restore. So we choose not to delete the namespace in the cleanup function, but leave it to the user to manually remove the namespace after the cluster is back to a normal state.
pliurh (Contributor Author) commented May 21, 2020

/retest

pliurh (Contributor Author) commented May 21, 2020

/test e2e-windows-hybrid-network

pliurh (Contributor Author) commented May 21, 2020

/test e2e-metal-ipi

1 similar comment
pliurh (Contributor Author) commented May 25, 2020

/test e2e-metal-ipi

squeed (Contributor) commented May 26, 2020

/lgtm
/approve
/retest

openshift-ci-robot added the lgtm label (Indicates that a PR is ready to be merged) on May 26, 2020
openshift-ci-robot (Contributor) commented:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: pliurh, squeed

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

openshift-ci-robot added the approved label (Indicates a PR has been approved by an approver from all required OWNERS files) on May 26, 2020
openshift-bot (Contributor) commented:

/retest

Please review the full test history for this PR and help us cut down flakes.

4 similar comments

openshift-ci-robot (Contributor) commented May 26, 2020

@pliurh: The following test failed, say /retest to rerun all failed tests:

Test name: ci/prow/e2e-metal-ipi
Commit: 1cf0fbb
Rerun command: /test e2e-metal-ipi

Please help us cut down on flakes by linking to an open issue when you hit one in your PR.


openshift-bot (Contributor) commented:

/retest

Please review the full test history for this PR and help us cut down flakes.

1 similar comment

openshift-merge-robot merged commit d476ea8 into openshift:master on May 26, 2020
openshift-ci-robot (Contributor) commented:

@pliurh: Some pull requests linked via external trackers have merged. The following pull requests linked via external trackers have not merged:

In response to this:

Bug 1820472: Not delete namespace object when cleanup not rended objects

