
Deletion of CRs (service and binding) ends with an unstable state for the CRs and the IBM Cloud Operator #265

Closed
thomassuedbroecker opened this issue Feb 24, 2022 · 3 comments

thomassuedbroecker commented Feb 24, 2022

Hi, today we figured out the following:

Our objective was

We wanted to delete our Binding and Service instances by removing the CRs inside a project/namespace, because we wanted to delete the services on IBM Cloud and set up a new configuration for our application in the cluster.

Steps to reproduce:

  • Step 1: Delete existing CRs (service and binding) of the IBM Cloud Operator in a project or namespace
  • Step 2: Observe that the deletion fails: the CRs are greyed out but not actually deleted.
  • Step 3: All deleted CRs are now stuck in an unstable (terminating) state.
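A CR that stays greyed out like this usually still carries a finalizer that the operator never cleared. A quick way to check (a sketch; the resource group `services.ibmcloud.ibm.com` matches the IBM Cloud Operator's API group, and `my-service` / `my-namespace` are placeholders for your own names):

```shell
# inspect_stuck_cr prints the deletionTimestamp and the remaining
# finalizers of a CR. A set timestamp plus a non-empty finalizer list
# means the API server is waiting for the operator to finish cleanup.
inspect_stuck_cr() {
  local kind="$1" name="$2" ns="$3"
  kubectl get "$kind" "$name" -n "$ns" \
    -o jsonpath='{.metadata.deletionTimestamp}{"\n"}{.metadata.finalizers}{"\n"}'
}

# Example:
#   inspect_stuck_cr services.ibmcloud.ibm.com my-service my-namespace
```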

Question:

How can we avoid this, or better, how can we fix that unstable state?

Steps to get a stable status again:

We tried the following steps to get back to a stable state, but none of them succeeded in the end.

  1. Deleting the operator development project that contained the CRs failed (the project hung in the Terminating state)
  2. Deleting the IBM Cloud services had no effect on the IBM Cloud Operator
  3. We noticed that the IBM Cloud Operator kept restarting over and over
  4. Uninstalling the IBM Cloud Operator had no effect on the deletion of the CRs and the projects
  5. Restarting the master node of the OpenShift cluster
  6. Restarting the worker nodes of the OpenShift cluster
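When the operator is gone or wedged but the CRs hang in Terminating, a common last resort is to clear the finalizers by hand so the API server can complete the delete. This is a sketch, not an official procedure: it bypasses the operator's own cleanup, so the backing services then have to be removed manually in IBM Cloud (names below are placeholders):

```shell
# clear_finalizers strips the finalizer list from a stuck CR with a
# JSON merge patch. WARNING: the operator's deprovisioning logic is
# skipped, so the IBM Cloud resource behind the CR is NOT deleted.
clear_finalizers() {
  local kind="$1" name="$2" ns="$3"
  kubectl patch "$kind" "$name" -n "$ns" --type=merge \
    -p '{"metadata":{"finalizers":null}}'
}

# Example:
#   clear_finalizers bindings.ibmcloud.ibm.com my-binding my-namespace
#   clear_finalizers services.ibmcloud.ibm.com my-service my-namespace
```

Clearing bindings before services mirrors their dependency order.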

Conclusion at the moment

Don't delete CRs of the IBM Cloud Operator, and don't delete projects that contain them! The deletion can break the IBM Cloud Operator: you have to install it again, and projects in the cluster may end up in an unstable state.

@thomassuedbroecker (Author)

We fixed it by giving our cluster more memory, reinstalling the operator once more, and cleaning up the leftover services in IBM Cloud.

@JohnStarich (Member)

Hey @thomassuedbroecker, thanks for posting your solution. Was the only issue memory pressure?

Deleting CRs should behave correctly, of course. 🙏

@thomassuedbroecker (Author)

Hi @JohnStarich, we noticed that you shouldn't delete too many IBM Cloud Operator CRs too quickly; that may result in an unstable state. It could be related to the talk "Writing a Kubernetes Operator: the Hard Parts" by Sebastien Guilloux, Elastic.
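One gentler approach, sketched below under the assumption that the flood of simultaneous deletes is what overwhelms the operator's reconcile queue: delete the CRs one at a time, waiting for each object to disappear before issuing the next delete, with bindings removed before the services they depend on (`my-namespace` is a placeholder):

```shell
# delete_crs_slowly removes IBM Cloud Operator CRs one by one instead
# of all at once: bindings first (they depend on services), and each
# delete blocks until the object is gone or the timeout expires.
delete_crs_slowly() {
  local ns="$1" kind name
  for kind in bindings.ibmcloud.ibm.com services.ibmcloud.ibm.com; do
    for name in $(kubectl get "$kind" -n "$ns" -o name); do
      kubectl delete "$name" -n "$ns" --wait=true --timeout=300s
    done
  done
}

# Example: delete_crs_slowly my-namespace
```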
