
ServiceMonitor manifests are not getting created #1127

Closed
gouravsw opened this issue Dec 28, 2021 · 7 comments · Fixed by #1525
gouravsw commented Dec 28, 2021

I am able to deploy Harbor in a K8s cluster. Harbor is up and running without any issues. I am using the latest release:

https://github.com/goharbor/harbor-helm/releases/tag/v1.8.1

Harbor OSS version: v2.4.1

For monitoring via Prometheus, I am planning to set metrics.enabled and its subfield metrics.serviceMonitor.enabled to true so that the ServiceMonitor manifests will be created. But somehow these ServiceMonitor manifests are not getting created, even though the metrics template passes all of its checks:

{{- if and ( .Capabilities.APIVersions.Has "monitoring.coreos.com/v1" ) .Values.metrics.enabled .Values.metrics.serviceMonitor.enabled }}

Per this expression, metrics.enabled is true and metrics.serviceMonitor.enabled is true. And the (.Capabilities.APIVersions.Has "monitoring.coreos.com/v1") check is satisfied, since the cluster has the requested api-version:

kubectl api-versions | grep coreos
monitoring.coreos.com/v1
monitoring.coreos.com/v1alpha1

Could someone provide insight into why the ServiceMonitor manifests are not being generated? What mistake am I making in my values.yaml file?

metrics:
  enabled: true
  core:
    path: /metrics
    port: 8001
  registry:
    path: /metrics
    port: 8001
  jobservice:
    path: /metrics
    port: 8001
  exporter:
    path: /metrics
    port: 8001
  serviceMonitor:
    enabled: true
    additionalLabels: {}
    interval: "15"
    metricRelabelings: 
      # - action: keep
      #   regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
      #   sourceLabels: [__name__]
    relabelings: 
      # - sourceLabels: [__meta_kubernetes_pod_node_name]
      #   separator: ;
      #   regex: ^(.*)$
      #   targetLabel: nodename
      #   replacement: $1
      #   action: replace
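
One way to confirm the symptom (a sketch; the release name my-harbor is illustrative) is to render the chart with these values and grep for the manifest:

helm repo add harbor https://helm.goharbor.io
helm template my-harbor harbor/harbor --version 1.8.1 -f values.yaml \
  | grep 'kind: ServiceMonitor'
# no output means the ServiceMonitor manifests were not rendered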
zyyw (Collaborator) commented Jan 6, 2022

As this comment in values.yaml mentions:

https://github.com/goharbor/harbor-helm/blob/a57ee9e7672e03f3c605661109c5291c3819b511/values.yaml#L835

you should probably install prometheus-operator first.
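
For completeness, a minimal sketch of that prerequisite (assuming the kube-prometheus-stack chart, which bundles prometheus-operator and the ServiceMonitor CRD; release name and namespace are illustrative):

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
# installs prometheus-operator and its CRDs, including ServiceMonitor
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace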

gouravsw (Author) commented

@zyyw The Prometheus operator was deployed prior to Harbor.

timbrown5 commented
I'm seeing this behaviour too:

kubectl api-versions | grep monitoring.coreos.com/v1
monitoring.coreos.com/v1
monitoring.coreos.com/v1alpha1

values.yaml:

metrics:
  enabled: true
  serviceMonitor:
    enabled: true

In my case I am using kustomize to deploy the Helm charts:

helmCharts:
  - name: harbor
    version: 1.9.2
    repo: https://helm.goharbor.io
    releaseName: deployment-harbor
    namespace: harbor
    valuesFile: values.yaml

helmCharts:
  - name: kube-prometheus-stack
    version: 36.6.2
    repo: https://prometheus-community.github.io/helm-charts
    releaseName: deployment-monitoring
    namespace: monitoring
    includeCRDs: false # Run install-crds.sh instead
    valuesFile: values.yaml

I deployed the Prometheus Helm chart after Harbor, but the templated YAML should get regenerated and reapplied each time, and I get no Harbor ServiceMonitor listed in the latest YAML that kustomize generates.
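
One possible workaround, under the assumption that your kustomize release supports it (newer versions document an apiVersions field on helmCharts entries that is forwarded to helm template): declaring the API version there should satisfy the chart's capabilities check. Verify the field against your kustomize version before relying on it:

helmCharts:
  - name: harbor
    version: 1.9.2
    repo: https://helm.goharbor.io
    releaseName: deployment-harbor
    namespace: harbor
    valuesFile: values.yaml
    # assumption: forwarded to `helm template --api-versions`
    apiVersions:
      - monitoring.coreos.com/v1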

@timbrown5
Copy link

timbrown5 commented Jul 13, 2022

OK, in my case I think it's that kustomize isn't telling Harbor that the API exists. I will store a local copy and update the chart to remove that check (storing a local copy and manually rebasing seems to be a best practice anyway).
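
The same gap can be reproduced with plain helm template, which only populates .Capabilities.APIVersions from flags passed on the command line; a sketch (release name illustrative):

# without capability hints the guard evaluates to false and nothing is rendered
helm template deployment-harbor harbor/harbor --version 1.9.2 \
  --set metrics.enabled=true --set metrics.serviceMonitor.enabled=true \
  | grep -c 'kind: ServiceMonitor'    # prints 0

# declaring the API version explicitly satisfies the .Capabilities check
helm template deployment-harbor harbor/harbor --version 1.9.2 \
  --set metrics.enabled=true --set metrics.serviceMonitor.enabled=true \
  --api-versions monitoring.coreos.com/v1 \
  | grep -c 'kind: ServiceMonitor'    # prints a nonzero count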

@divick
Copy link

divick commented Nov 24, 2022

I too am facing the same issue with kustomize. Not sure if creating the ServiceMonitor separately is a good idea.
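
For reference, a separately applied manifest would look roughly like this sketch; the selector labels and port name are assumptions that must be matched against the Services your Harbor release actually creates:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: harbor-exporter
  namespace: harbor
spec:
  selector:
    matchLabels:
      app: harbor            # assumption: match your release's exporter Service labels
  namespaceSelector:
    matchNames:
      - harbor
  endpoints:
    - path: /metrics
      port: http-metrics     # assumption: the named metrics port on the Service
      interval: 15s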

NyCodeGHG commented
When kustomize executes Helm, it probably does not get the capabilities correctly. Maybe it's better to remove that check, like most Helm charts do?

sudermanjr (Contributor) commented
Any use of helm template will also have this issue. I think removing the Capabilities check entirely would be best.
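
For illustration, removing the check would reduce the guard in the ServiceMonitor template to something like the following sketch (the actual change is in the commit referenced below):

{{- if and .Values.metrics.enabled .Values.metrics.serviceMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
# ... rest of the template unchanged ...
{{- end }}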

ywk253100 added a commit that referenced this issue Jun 29, 2023
Fix #1127 - remove capabilities check for prometheus (…itor-capabilities)

rgarcia89 pushed a commit to rgarcia89/harbor-helm that referenced this issue Jul 12, 2023
Signed-off-by: Andy Suderman <andy@suderman.dev>
Signed-off-by: Raul Garcia Sanchez <info@raulgarcia.de>