
Ability to disable reconciliation for a single resource, or an entire namespace #795

Closed
sthomson-wyn opened this issue Mar 29, 2023 · 11 comments
Labels
question Further information is requested

Comments

@sthomson-wyn

Describe your question

I know that we can disable reconciliation for all resources in a cluster by simply scaling down the config connector pods.

But is there a way to disable reconciliation for a single resource, or a single namespace? For context, we split up the config connector resources with 1 namespace per project.

The use case here would be disabling reconciliation in a single project for emergency manual intervention, without needing to stop reconciliation across all projects that are managed by Config Connector.

@sthomson-wyn sthomson-wyn added the question Further information is requested label Mar 29, 2023
@diviner524
Collaborator

@sthomson-wyn We just added a new feature in the latest 1.102.0 release. I believe you can set the reconcile interval to 0 for selected resources, which should achieve the same purpose. You can give it a try and let us know if it works.

https://cloud.google.com/config-connector/docs/concepts/reconciliation#configuring_the_reconciliation_interval
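For example, the annotation can be applied to an existing resource with kubectl (the resource kind, name, and namespace below are placeholders):

```shell
# Set the per-resource reconcile interval to 0, which disables periodic
# re-reconciliation for this resource (placeholder names).
kubectl annotate storagebucket my-bucket \
  cnrm.cloud.google.com/reconcile-interval-in-seconds=0 \
  --namespace my-namespace --overwrite
```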

@sthomson-wyn
Author

@diviner524 Awesome, I'll give it a shot and close this if it works as expected. Thanks!

@sthomson-wyn
Author

@diviner524 unfortunately that does not work. I've set it to 0 and changed the labels on a bucket and a configuration setting, and Config Connector reconciled the bucket within a few hours.

@diviner524
Collaborator

@sthomson-wyn Have you made changes to the bucket resource in the K8s cluster? Setting the value to 0 does not disable reconciliation completely; it only disables the periodic polling. Reconciliation will still be triggered if changes are made to the corresponding KRM resource.

@sthomson-wyn
Author

@diviner524 I did not make any changes to the resource manually, and Argo CD records the last sync as over a day ago, so no change there. The "generation" field of the resource hasn't changed either, so I don't think a change was made to the corresponding KRM resource.
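Roughly how I checked the generation field (resource kind, name, and namespace are placeholders):

```shell
# Inspect metadata.generation for the bucket; it only increments when
# the resource's spec changes in the cluster.
kubectl get storagebucket test-bucket -n my-proj-id \
  -o jsonpath='{.metadata.generation}'
```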

@diviner524
Collaborator

@sthomson-wyn I see, thanks for confirming.

  • Are you able to find container logs showing the controller tried to reconcile the resource:

https://cloud.google.com/config-connector/docs/troubleshooting#check-controller-logs

  • Also if you can provide us with steps to reproduce this issue, we can look into it and see if this is a bug that needs to be fixed.

@sthomson-wyn
Author

I will create a repro & get the logs

@sthomson-wyn
Author

@diviner524 I believe I've found the culprit. It reconciles after restarting the cnrm-controller-manager-0 pod, so the annotation is only respected for the lifetime of the pod.

@sthomson-wyn
Author

For posterity, a repro:

apiVersion: v1
kind: Namespace
metadata:
  annotations:
    cnrm.cloud.google.com/project-id: my-proj-id
  name: my-proj-id
---
apiVersion: storage.cnrm.cloud.google.com/v1beta1
kind: StorageBucket
metadata:
  annotations:
    cnrm.cloud.google.com/reconcile-interval-in-seconds: "0"
  labels:
    test: hello
  name: test-bucket
  namespace: my-proj-id
spec:
  location: us
  publicAccessPrevention: enforced
  resourceID: test-bucket-my-proj-id
  storageClass: STANDARD
  uniformBucketLevelAccess: true

After the resource is created & reconciled, go into the UI and modify the labels (or any setting)

After waiting for any amount of time and observing no reconciliation (as desired), restart the cnrm-controller-manager-0 pod. After the pod goes through the CRs, the labels on the bucket will match what is in the cluster.
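The restart step, roughly (assuming Config Connector's controller runs in the cnrm-system namespace, its default):

```shell
# Delete the controller pod; the StatefulSet recreates it, and the fresh
# controller re-reconciles all resources on startup, overwriting the
# manual changes made in the UI.
kubectl delete pod cnrm-controller-manager-0 -n cnrm-system
```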

@diviner524
Collaborator

Thanks @sthomson-wyn for sharing the details! Yes, we are able to reproduce it. We will work on a fix.

@diviner524
Collaborator

This is fixed in v1.104.0.
