[*-controller] /metrics endpoint not up2date after multiple updates #4139

Closed

muryoutaisuu opened this issue Aug 8, 2023 · 5 comments

muryoutaisuu commented Aug 8, 2023

Describe the bug

After upgrading multiple HelmCharts and Kustomizations today, the /metrics endpoints of multiple controllers no longer report the correct state of their resources.

Restarting the controller pods resets the metrics to the actual state observed in the cluster, but we would rather have them sync to the correct state automatically.
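
The restart can be done with a rollout restart instead of deleting the pods; a minimal sketch (assuming the default flux-system namespace and deployment names):

## rollout-restart the controllers so their metric registries are rebuilt from the watched state
> k -n flux-system rollout restart deployment/helm-controller deployment/kustomize-controller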

Example 1 - helm-controller

/metrics endpoint showing wrong state (not Ready) for alertmanager HR:

## port-forwarding for reaching /metrics endpoint on localhost:8080
> k -n flux-system port-forward pods/helm-controller-74fcd69796-fs2gc 8080:8080 1>/dev/null&

> curl -s localhost:8080/metrics | grep "gotk_reconcile_condition" | grep alertmanager
gotk_reconcile_condition{kind="HelmRelease",name="alertmanager",namespace="monitoring-system",status="Deleted",type="Ready"} 0
gotk_reconcile_condition{kind="HelmRelease",name="alertmanager",namespace="monitoring-system",status="False",type="Ready"} 1
gotk_reconcile_condition{kind="HelmRelease",name="alertmanager",namespace="monitoring-system",status="True",type="Ready"} 0
gotk_reconcile_condition{kind="HelmRelease",name="alertmanager",namespace="monitoring-system",status="Unknown",type="Ready"} 0

alertmanager HR according to kubectl:

> k get hr -n monitoring-system alertmanager                                                
NAME           AGE     READY   STATUS
alertmanager   2y31d   True    Release reconciliation succeeded

alertmanager HR according to flux:

> flux -n monitoring-system get helmreleases alertmanager                                   
NAME        	REVISION	SUSPENDED	READY	MESSAGE                          
alertmanager	8.15.3  	False    	True 	Release reconciliation succeeded

Even though the alertmanager HR reports Ready in both the kubectl and the flux output, it is exported as not Ready by /metrics on the helm-controller.

In fact, many of the HRs are exported as not Ready:

# keep only the gotk_reconcile_condition time series with value 1, then filter out those in True state (those are OK)
> curl -s localhost:8080/metrics | grep "gotk_reconcile_condition" | grep 1 | grep -v True
gotk_reconcile_condition{kind="HelmRelease",name="alertmanager",namespace="monitoring-system",status="False",type="Ready"} 1
gotk_reconcile_condition{kind="HelmRelease",name="core-platform",namespace="mdo-gitlab-runner",status="False",type="Ready"} 1
gotk_reconcile_condition{kind="HelmRelease",name="gitlab-runner",namespace="mdo-gitlab-runner",status="False",type="Ready"} 1
gotk_reconcile_condition{kind="HelmRelease",name="grafana-operator",namespace="monitoring-system",status="False",type="Ready"} 1
gotk_reconcile_condition{kind="HelmRelease",name="kube-state-metrics",namespace="kube-system",status="False",type="Ready"} 1
gotk_reconcile_condition{kind="HelmRelease",name="metallb",namespace="metallb-system",status="False",type="Ready"} 1
gotk_reconcile_condition{kind="HelmRelease",name="metrics-server",namespace="kube-system",status="False",type="Ready"} 1
gotk_reconcile_condition{kind="HelmRelease",name="nginx-ingress-controller-management",namespace="ingress-system",status="False",type="Ready"} 1
gotk_reconcile_condition{kind="HelmRelease",name="node-exporter",namespace="kube-system",status="False",type="Ready"} 1
gotk_reconcile_condition{kind="HelmRelease",name="node-problem-detector",namespace="monitoring-system",status="False",type="Ready"} 1
gotk_reconcile_condition{kind="HelmRelease",name="prometheus",namespace="monitoring-system",status="False",type="Ready"} 1
gotk_reconcile_condition{kind="HelmRelease",name="prometheus-operator",namespace="monitoring-system",status="False",type="Ready"} 1
gotk_reconcile_condition{kind="HelmRelease",name="prometheusrules-grafanadashboards",namespace="monitoring-system",status="Deleted",type="Ready"} 1
gotk_reconcile_condition{kind="HelmRelease",name="schiff-backup",namespace="schiff-system",status="False",type="Ready"} 1
gotk_reconcile_condition{kind="HelmRelease",name="schiff-ingress",namespace="schiff-system",status="False",type="Ready"} 1
gotk_reconcile_condition{kind="HelmRelease",name="schiff-metrics-server",namespace="schiff-system",status="False",type="Ready"} 1
gotk_reconcile_condition{kind="HelmRelease",name="schiff-monitoring",namespace="schiff-system",status="False",type="Ready"} 1
gotk_reconcile_condition{kind="HelmRelease",name="schiff-policies",namespace="schiff-system",status="False",type="Ready"} 1
gotk_reconcile_condition{kind="HelmRelease",name="schiff-rbac",namespace="schiff-system",status="False",type="Ready"} 1
gotk_reconcile_condition{kind="HelmRelease",name="staticnbi",namespace="mdo-gitlab-runner",status="False",type="Ready"} 1
gotk_reconcile_condition{kind="HelmRelease",name="thanos",namespace="monitoring-system",status="False",type="Ready"} 1
gotk_reconcile_condition{kind="HelmRelease",name="vector-agent",namespace="logging-system",status="Deleted",type="Ready"} 1
gotk_reconcile_condition{kind="HelmRelease",name="vector-aggregator",namespace="logging-system",status="Deleted",type="Ready"} 1

actual state for all HRs:

> k get hr -A        
NAMESPACE                       NAME                                  AGE     READY   STATUS
backup-system                   velero                                550d    True    Release reconciliation succeeded
cert-manager-system             cert-manager                          159d    True    Release reconciliation succeeded
ingress-system                  nginx-ingress-controller-customer     2y17d   True    Release reconciliation succeeded
ingress-system                  nginx-ingress-controller-management   510d    True    Release reconciliation succeeded
kube-system                     kube-state-metrics                    2y31d   True    Release reconciliation succeeded
kube-system                     metrics-server                        628d    True    Release reconciliation succeeded
kube-system                     node-exporter                         2y31d   True    Release reconciliation succeeded
kube-system                     vsphere-csi                           153d    True    Release reconciliation succeeded
logging-system                  logging-operator                      23h     True    Release reconciliation succeeded
logging-system                  logging-operator-logging              23h     True    Release reconciliation succeeded
mdo-gitlab-runner               core-platform                         118d    False   HelmChart 'mdo-gitlab-runner/mdo-gitlab-runner-core-platform' is not ready
mdo-gitlab-runner               gitlab-runner                         118d    False   HelmChart 'mdo-gitlab-runner/mdo-gitlab-runner-gitlab-runner' is not ready
mdo-gitlab-runner               staticnbi                             118d    False   HelmChart 'mdo-gitlab-runner/mdo-gitlab-runner-staticnbi' is not ready
metallb-system                  metallb                               2y31d   True    Release reconciliation succeeded
monitoring-system               alertmanager                          2y31d   True    Release reconciliation succeeded
monitoring-system               grafana-operator                      546d    True    Release reconciliation succeeded
monitoring-system               node-problem-detector                 159d    True    Release reconciliation succeeded
monitoring-system               prometheus                            502d    True    Release reconciliation succeeded
monitoring-system               prometheus-operator                   2y31d   True    Release reconciliation succeeded
monitoring-system               thanos                                2y31d   True    Release reconciliation succeeded
node-feature-discovery-system   node-feature-discovery                549d    True    Release reconciliation succeeded
policy-system                   kyverno                               549d    True    Release reconciliation succeeded
policy-system                   policy-reporter                       549d    True    Release reconciliation succeeded
rbac-system                     rbac-manager                          2y31d   True    Release reconciliation succeeded
schiff-system                   schiff-backup                         368d    True    Release reconciliation succeeded
schiff-system                   schiff-core                           368d    True    Release reconciliation succeeded
schiff-system                   schiff-ingress                        368d    True    Release reconciliation succeeded
schiff-system                   schiff-loadbalancing                  368d    True    Release reconciliation succeeded
schiff-system                   schiff-logging                        368d    True    Release reconciliation succeeded
schiff-system                   schiff-metrics-server                 368d    True    Release reconciliation succeeded
schiff-system                   schiff-monitoring                     368d    True    Release reconciliation succeeded
schiff-system                   schiff-nfd                            368d    True    Release reconciliation succeeded
schiff-system                   schiff-policies                       368d    True    Release reconciliation succeeded
schiff-system                   schiff-rbac                           368d    True    Release reconciliation succeeded
schiff-system                   schiff-storage                        368d    True    Release reconciliation succeeded
trident-system                  trident-operator                      579d    True    Release reconciliation succeeded

All are Ready except for the three in the mdo-gitlab-runner namespace.

Example 2 - kustomize-controller

/metrics endpoint not showing all known Kustomizations:

## port-forwarding for reaching /metrics endpoint on localhost:8080
> k -n flux-system port-forward pods/kustomize-controller-5c869bc9d9-cw9nl 8080:8080 1>/dev/null& 

## keep only the gotk_reconcile_condition time series with value 1
> curl -s localhost:8080/metrics | grep "gotk_reconcile_condition"  | grep 1        
gotk_reconcile_condition{kind="Kustomization",name="grafana-dashboards",namespace="monitoring-system",status="True",type="Ready"} 1
gotk_reconcile_condition{kind="Kustomization",name="helmrelease-kustomization",namespace="schiff-tenant",status="False",type="Ready"} 1
gotk_reconcile_condition{kind="Kustomization",name="repositories-kustomization",namespace="schiff-tenant",status="False",type="Ready"} 1
gotk_reconcile_condition{kind="Kustomization",name="tenant-entrypoint",namespace="schiff-tenant",status="True",type="Ready"} 1
gotk_reconcile_condition{kind="Kustomization",name="upstream-policies",namespace="schiff-system",status="True",type="Ready"} 1

known Kustomizations according to flux:

> flux get kustomizations -A              
NAMESPACE        	NAME                                 	REVISION            	SUSPENDED	READY	MESSAGE                                                          
monitoring-system	grafana-dashboards                   	0.5.1@sha1:339f883c 	False    	True 	Applied revision: 0.5.1@sha1:339f883c                           	
schiff-system    	components-locations-bn-reftmdc-mdo-1	main@sha1:32e84c7f  	False    	True 	Applied revision: main@sha1:32e84c7f                            	
schiff-system    	components-onboarding-managed        	main@sha1:32e84c7f  	False    	True 	Applied revision: main@sha1:32e84c7f                            	
schiff-system    	upstream-policies                    	v0.0.8@sha1:1a43b0f2	False    	True 	Applied revision: v0.0.8@sha1:1a43b0f2                          	
schiff-tenant    	helmrelease-kustomization            	main@sha1:76805406  	False    	False	GitRepository.source.toolkit.fluxcd.io "mdo-reference" not found	
schiff-tenant    	repositories-kustomization           	main@sha1:76805406  	False    	False	GitRepository.source.toolkit.fluxcd.io "mdo-reference" not found	
schiff-tenant    	tenant-entrypoint                    	main@sha1:76805406  	False    	True 	Applied revision: main@sha1:76805406                        

Kustomizations according to kubectl:

> k get kustomizations.kustomize.toolkit.fluxcd.io -A
NAMESPACE           NAME                                    AGE     READY   STATUS
monitoring-system   grafana-dashboards                      23h     True    Applied revision: 0.5.1@sha1:339f883c3f46df39a722b0951f7fb1ae7faafa0b
schiff-system       components-locations-bn-reftmdc-mdo-1   2y31d   True    Applied revision: main@sha1:32e84c7fc394bf8eb40c5c9f47f6f91139365606
schiff-system       components-onboarding-managed           40d     True    Applied revision: main@sha1:32e84c7fc394bf8eb40c5c9f47f6f91139365606
schiff-system       upstream-policies                       2y31d   True    Applied revision: v0.0.8@sha1:1a43b0f24f5fadb4662148e776e9869e1fa5f968
schiff-tenant       helmrelease-kustomization               556d    False   GitRepository.source.toolkit.fluxcd.io "mdo-reference" not found
schiff-tenant       repositories-kustomization              124d    False   GitRepository.source.toolkit.fluxcd.io "mdo-reference" not found
schiff-tenant       tenant-entrypoint                       28d     True    Applied revision: main@sha1:7680540684e53a47ad2d68d322a4b1e60c3fa784

Two Kustomizations are missing from the /metrics endpoint: components-locations-bn-reftmdc-mdo-1 and components-onboarding-managed.

Other controllers may be affected as well; I have only checked the kustomize and helm controllers.

Steps to reproduce

There is not yet a reliable way to reproduce the behaviour. However, upgrading multiple HRs (up to 10) at once on multiple clusters produced the same behaviour on all of them.

Expected behavior

The output of flux get helmreleases -A and the gotk_reconcile_condition metric on the helm-controller's /metrics endpoint should show the same number and state of HelmReleases in the cluster. The same goes for Kustomizations and the corresponding kustomize-controller.

Other controllers could possibly be affected as well.

Screenshots and recordings

asciicast

OS / Distro

Ubuntu 22.04.2

Flux version

v2.0.1

Flux check

flux check
► checking prerequisites
✔ Kubernetes 1.25.11 >=1.24.0-0
► checking controllers
✔ helm-controller: deployment ready
► ghcr.io/fluxcd/helm-controller:v0.35.0
✔ helm-controller-platform: deployment ready
► ghcr.io/fluxcd/helm-controller:v0.35.0
✔ image-automation-controller: deployment ready
► ghcr.io/fluxcd/image-automation-controller:v0.35.0
✔ image-reflector-controller: deployment ready
► ghcr.io/fluxcd/image-reflector-controller:v0.29.1
✔ kustomize-controller: deployment ready
► ghcr.io/fluxcd/kustomize-controller:v1.0.1
✔ kustomize-controller-platform: deployment ready
► ghcr.io/fluxcd/kustomize-controller:v1.0.1
✔ notification-controller: deployment ready
► ghcr.io/fluxcd/notification-controller:v1.0.0
✔ source-controller: deployment ready
► ghcr.io/fluxcd/source-controller:v1.0.1
► checking crds
✔ alerts.notification.toolkit.fluxcd.io/v1beta2
✔ buckets.source.toolkit.fluxcd.io/v1beta2
✔ gitrepositories.source.toolkit.fluxcd.io/v1
✔ helmcharts.source.toolkit.fluxcd.io/v1beta2
✔ helmreleases.helm.toolkit.fluxcd.io/v2beta1
✔ helmrepositories.source.toolkit.fluxcd.io/v1beta2
✔ imagepolicies.image.toolkit.fluxcd.io/v1beta2
✔ imagerepositories.image.toolkit.fluxcd.io/v1beta2
✔ imageupdateautomations.image.toolkit.fluxcd.io/v1beta1
✔ kustomizations.kustomize.toolkit.fluxcd.io/v1
✔ ocirepositories.source.toolkit.fluxcd.io/v1beta2
✔ providers.notification.toolkit.fluxcd.io/v1beta2
✔ receivers.notification.toolkit.fluxcd.io/v1
✔ all checks passed

Git provider

No response

Container Registry provider

No response

Additional context

Logs specific to alertmanager HR

> flux logs -n monitoring-system | grep alertmanager
2023-08-07T07:38:36.976Z info HelmRelease/alertmanager.monitoring-system - could not find optional ConfigMap 'monitoring-system/alertmanager-tenant-receivers' 
2023-08-07T07:38:36.979Z info HelmRelease/alertmanager.monitoring-system - could not find optional ConfigMap 'monitoring-system/alertmanager-tenant-route' 
2023-08-07T07:38:36.996Z info HelmRelease/alertmanager.monitoring-system - reconcilation finished in 31.438314ms 
2023-08-07T07:38:37.850Z info HelmRelease/alertmanager.monitoring-system - could not find optional ConfigMap 'monitoring-system/alertmanager-tenant-receivers' 
2023-08-07T07:38:37.853Z info HelmRelease/alertmanager.monitoring-system - could not find optional ConfigMap 'monitoring-system/alertmanager-tenant-route' 
2023-08-07T07:38:37.870Z info HelmRelease/alertmanager.monitoring-system - reconcilation finished in 30.752077ms 
2023-08-07T07:38:39.380Z info HelmRelease/alertmanager.monitoring-system - could not find optional ConfigMap 'monitoring-system/alertmanager-tenant-receivers' 
2023-08-07T07:38:39.383Z info HelmRelease/alertmanager.monitoring-system - could not find optional ConfigMap 'monitoring-system/alertmanager-tenant-route' 
2023-08-07T07:38:39.396Z info HelmRelease/alertmanager.monitoring-system - reconcilation finished in 25.204381ms 
2023-08-07T07:38:43.164Z info HelmRelease/alertmanager.monitoring-system - HelmChart 'schiff-system/monitoring-system-alertmanager' is not ready 
2023-08-07T07:38:43.179Z info HelmRelease/alertmanager.monitoring-system - reconcilation finished in 15.716156ms, next run in 1m0s 
2023-08-07T07:40:20.608Z info HelmRelease/alertmanager.monitoring-system - chart diverged from template 
2023-08-07T07:40:20.615Z info HelmRelease/alertmanager.monitoring-system - HelmChart 'schiff-system/monitoring-system-alertmanager' is not ready 
2023-08-07T07:40:20.628Z info HelmRelease/alertmanager.monitoring-system - reconcilation finished in 49.393093ms, next run in 1m0s 
2023-08-07T07:40:35.455Z info HelmRelease/alertmanager.monitoring-system - could not find optional ConfigMap 'monitoring-system/alertmanager-tenant-receivers' 
2023-08-07T07:40:35.457Z info HelmRelease/alertmanager.monitoring-system - could not find optional ConfigMap 'monitoring-system/alertmanager-tenant-route' 
2023-08-07T07:40:36.601Z debug HelmRelease/alertmanager.monitoring-system - preparing upgrade for alertmanager 
2023-08-07T07:40:38.010Z debug HelmRelease/alertmanager.monitoring-system - resetting values to the chart's original version 
2023-08-07T07:40:38.931Z debug HelmRelease/alertmanager.monitoring-system - performing update for alertmanager 
2023-08-07T07:40:39.140Z debug HelmRelease/alertmanager.monitoring-system - creating upgraded release for alertmanager 
2023-08-07T07:40:41.948Z debug HelmRelease/alertmanager.monitoring-system - waiting for release alertmanager resources (created: 0 updated: 6  deleted: 0) 
2023-08-07T07:40:42.370Z debug HelmRelease/alertmanager.monitoring-system - updating status for upgraded release for alertmanager 
2023-08-07T07:40:42.899Z info HelmRelease/alertmanager.monitoring-system - reconcilation finished in 7.4495251s, next run in 5m0s 
2023-08-07T07:42:38.061Z info HelmRelease/alertmanager.monitoring-system - could not find optional ConfigMap 'monitoring-system/alertmanager-tenant-receivers' 
2023-08-07T07:42:38.076Z info HelmRelease/alertmanager.monitoring-system - could not find optional ConfigMap 'monitoring-system/alertmanager-tenant-route' 
2023-08-07T07:42:40.378Z info HelmRelease/alertmanager.monitoring-system - no diff in cluster resources compared to release 
2023-08-07T07:42:40.411Z info HelmRelease/alertmanager.monitoring-system - reconcilation finished in 2.355986212s, next run in 5m0s 

Code of Conduct

  • I agree to follow this project's Code of Conduct
@stefanprodan
Copy link
Member

stefanprodan commented Aug 8, 2023

@muryoutaisuu are the missing Kustomizations by any chance on an older API version, or are they all on kustomize.toolkit.fluxcd.io/v1 in your Git repo? Can you post the YAML from Git for components-locations-bn-reftmdc-mdo-1 here?

muryoutaisuu (Author) commented Aug 8, 2023

@stefanprodan you are correct, in the Git repo it is still on an older version:

> cat kustomizations/components-locations-bn-reftmdc-mdo-1_schiff-system.yaml 
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: components-locations-bn-reftmdc-mdo-1
  namespace: schiff-system
  labels:
    sharding.fluxcd.io/key: platform
spec:
  decryption:
    provider: sops
    secretRef:
      name: sops-age-schiff
  interval: 5m
  path: ./locations/bn/reftmdc/mdo-1
  prune: true
  sourceRef:
    kind: GitRepository
    name: components
  timeout: 2m


stefanprodan commented Aug 8, 2023

I'm surprised that this is the only thing not working; v1beta1 is a three-year-old API. Please update all your manifests in Git to the APIs available in GA (a rough sketch of the bulk update follows the list):

  • Kustomization kustomize.toolkit.fluxcd.io/v1
  • GitRepository source.toolkit.fluxcd.io/v1
  • HelmRepository source.toolkit.fluxcd.io/v1beta2
  • HelmRelease helm.toolkit.fluxcd.io/v2beta1
  • Receiver notification.toolkit.fluxcd.io/v1
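
For the bulk update, something along these lines can work (a rough sketch, not a migration tool; review the git diff afterwards, since some spec fields also changed between these API versions):

## example: bump the Kustomization apiVersion across a repo checkout
> grep -rl 'kustomize.toolkit.fluxcd.io/v1beta1' . | xargs sed -i 's|kustomize.toolkit.fluxcd.io/v1beta1|kustomize.toolkit.fluxcd.io/v1|g'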

You're also using sharding.fluxcd.io/key: platform, so you need to look at the kustomize-controller pod responsible for this shard. I suspect this is the case for helm-controller as well.
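
To see which resources are assigned to a shard, you can filter by the shard label (an illustrative query, using the shard key from this cluster):

## list the HelmReleases handled by the platform shard
> k get hr -A -l sharding.fluxcd.io/key=platform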


stefanprodan commented Aug 8, 2023

By the way, in the next release we're deprecating the gotk_reconcile_condition metric reported by the Flux controllers. It will instead be reported by kube-state-metrics, so no matter how many shards you're running, all Flux resource metrics will be tracked from a single point: the kube-state-metrics instance. Ref: #4128
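
For reference, with the kube-state-metrics custom-resource-state feature the configuration will look roughly like the sketch below, per Flux kind (illustrative only, not the final configuration; see #4128 for the authoritative version):

kind: CustomResourceStateMetrics
spec:
  resources:
    - groupVersionKind:
        group: kustomize.toolkit.fluxcd.io
        version: v1
        kind: Kustomization
      metricNamePrefix: gotk
      metrics:
        - name: resource_info
          help: "The current state of a Flux Kustomization."
          each:
            type: Info
            info:
              labelsFromPath:
                name: [metadata, name]
          labelsFromPath:
            exported_namespace: [metadata, namespace]
            ready: [status, conditions, "[type=Ready]", status]
            suspended: [spec, suspend]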

muryoutaisuu (Author) commented Aug 8, 2023

Thanks for the advice!

Updating the API versions is on our agenda.

I just checked the sharding of the alertmanager HR:

~ | kube mdo-1.reftmdc.bn | 2023.08.08 11:24:25
> k port-forward -n flux-system pods/helm-controller-74fcd69796-fs2gc 8080:8080 1>/dev/null&
[1] 633763

~ | kube mdo-1.reftmdc.bn | 2023.08.08 11:24:41
> k port-forward -n flux-system pods/helm-controller-platform-79c945ffbb-zhq6s  8081:8080 1>/dev/null&
[2] 634655

~ | 2023.08.08 11:24:51
> curl -s localhost:8080/metrics | grep gotk_reconcile_condition | grep alertmanager
gotk_reconcile_condition{kind="HelmRelease",name="alertmanager",namespace="monitoring-system",status="Deleted",type="Ready"} 0
gotk_reconcile_condition{kind="HelmRelease",name="alertmanager",namespace="monitoring-system",status="False",type="Ready"} 1
gotk_reconcile_condition{kind="HelmRelease",name="alertmanager",namespace="monitoring-system",status="True",type="Ready"} 0
gotk_reconcile_condition{kind="HelmRelease",name="alertmanager",namespace="monitoring-system",status="Unknown",type="Ready"} 0

~ | 2023.08.08 11:25:00
> curl -s localhost:8081/metrics | grep gotk_reconcile_condition | grep alertmanager
gotk_reconcile_condition{kind="HelmRelease",name="alertmanager",namespace="monitoring-system",status="Deleted",type="Ready"} 0
gotk_reconcile_condition{kind="HelmRelease",name="alertmanager",namespace="monitoring-system",status="False",type="Ready"} 0
gotk_reconcile_condition{kind="HelmRelease",name="alertmanager",namespace="monitoring-system",status="True",type="Ready"} 1
gotk_reconcile_condition{kind="HelmRelease",name="alertmanager",namespace="monitoring-system",status="Unknown",type="Ready"} 0

~ | kube mdo-1.reftmdc.bn | 2023.08.08 11:26:13
> k -n monitoring-system get hr alertmanager -o yaml | grep shard
    sharding.fluxcd.io/key: platform

alertmanager is sharded to key platform but still appears on both controllers; on the non-sharded controller it appears as not Ready.

After restarting both controllers, it now appears only on the sharded controller:

~ | 2023.08.08 11:40:19
> curl -s localhost:8080/metrics | grep gotk_reconcile_condition | grep alertmanager

~ | 2023.08.08 11:40:25
> curl -s localhost:8081/metrics | grep gotk_reconcile_condition | grep alertmanager
gotk_reconcile_condition{kind="HelmRelease",name="alertmanager",namespace="monitoring-system",status="Deleted",type="Ready"} 0
gotk_reconcile_condition{kind="HelmRelease",name="alertmanager",namespace="monitoring-system",status="False",type="Ready"} 0
gotk_reconcile_condition{kind="HelmRelease",name="alertmanager",namespace="monitoring-system",status="True",type="Ready"} 1
gotk_reconcile_condition{kind="HelmRelease",name="alertmanager",namespace="monitoring-system",status="Unknown",type="Ready"} 0

For this upgrade cycle we will therefore restart the non-sharded controllers to reset state and patiently await the kube-state-metrics reporting. Thanks for the heads-up!
