
data missing in grafana dashboard #50

Closed · rajeshkothinti opened this issue Sep 11, 2022 · 5 comments · Fixed by #168
@rajeshkothinti

Hello,

I followed the chart installation instructions, keeping all the default values, but could not get any data in the dashboard. I then tried modifying values.yaml so the data would be scraped and show up in Grafana.

I tried with the content below as well, but no luck.

The chart installation itself went fine, so I must be missing something to get data into the dashboard. I was able to get Kubernetes cluster data, such as cluster memory and CPU, in another monitoring dashboard.

Please advise.

helm upgrade x509-certificate-exporter enix/x509-certificate-exporter --values myvalues.yaml

hostPathsExporter:
  daemonSets:
    nodes:
      watchFiles:
        - /var/lib/kubelet/pki/kubelet-client-current.pem
        - /etc/kubernetes/pki/apiserver.crt
        - /etc/kubernetes/pki/apiserver-etcd-client.crt
        - /etc/kubernetes/pki/apiserver-kubelet-client.crt
        - /etc/kubernetes/pki/ca.crt
        - /etc/kubernetes/pki/front-proxy-ca.crt
        - /etc/kubernetes/pki/front-proxy-client.crt
        - /etc/kubernetes/pki/etcd/ca.crt
        - /etc/kubernetes/pki/etcd/healthcheck-client.crt
        - /etc/kubernetes/pki/etcd/peer.crt
        - /etc/kubernetes/pki/etcd/server.crt
      watchKubeconfFiles:
        - /etc/kubernetes/admin.conf

Thanks
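
Before digging into Prometheus or Grafana, it can help to confirm that the exporter itself serves metrics. A rough sketch, assuming the exporter's default metrics port 9793 and common label/service names derived from the release above; adjust to your install:

kubectl get pods -l app.kubernetes.io/name=x509-certificate-exporter   # are the exporter pods running?
kubectl port-forward svc/x509-certificate-exporter 9793:9793 &
curl -s http://localhost:9793/metrics | grep -E 'x509_cert_(not_after|expired)|x509_read_errors'
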
@npdgm npdgm self-assigned this Sep 12, 2022
@npdgm npdgm added the usage label Sep 12, 2022
@npdgm (Member) commented Sep 12, 2022

Hi @kRajr

Given these values, you must be a Prometheus operator user. It is likely the operator did not select the ServiceMonitor object installed by our Helm chart.

Troubleshooting could go this way:

  • Get access to the Prometheus web UI (kubectl port-forward, if it is not exposed in your cluster)
  • Go to Status/Targets and look for "x509"
  • If the x509-certificate-exporter shows up as "DOWN", it could be a NetworkPolicy issue; you may have set up network isolation between namespaces.
  • If it's missing entirely, then it has to do with the Prometheus operator:
    • Check the operator's logs while you helm install the exporter (uninstall it first). They may explain why the ServiceMonitor is not accepted.
    • Inspect the YAML definition of your Prometheus custom resource. It may have serviceMonitorNamespaceSelector or serviceMonitorSelector settings that prevent it from finding the ServiceMonitor installed by x509-certificate-exporter.

I hope this helps a bit. Otherwise please provide a YAML export of your Prometheus object.
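
A rough command-line sketch of these checks (the monitoring namespace, service, and label names are assumptions; adjust to where your Prometheus operator runs):

# 1. Reach the Prometheus web UI and check Status/Targets for "x509"
kubectl -n monitoring port-forward svc/prometheus-operated 9090:9090
# then browse http://localhost:9090/targets

# 2. Inspect the selectors on the Prometheus custom resource
kubectl -n monitoring get prometheus -o yaml | grep -A 3 -E 'serviceMonitor(Namespace)?Selector'

# 3. Compare with the labels carried by the chart's ServiceMonitor
kubectl get servicemonitor -o yaml -l app.kubernetes.io/name=x509-certificate-exporter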

@rajeshkothinti (Author)

Hi @npdgm

Thanks for the troubleshooting steps, they gave me some insight. I redeployed the chart with default values. I am using the Prometheus operator, which was deployed using the kube-prometheus Helm chart. In the Prometheus CRD I see labels added only for serviceMonitorSelector, as below:

serviceMonitorNamespaceSelector: {}
serviceMonitorSelector:
  matchLabels:
    release: kube-prometheus-stack

I added the label release: kube-prometheus-stack in the x509 ServiceMonitor selector field so the Prometheus operator would match it, but that did not help, and there was nothing in the Prometheus operator pod logs either. I then redeployed the x509-certificate-exporter chart with the default values.yaml and manually added the label below to the x509 ServiceMonitor before deploying the chart. Tailing the Prometheus operator logs shows nothing mentioning x509.

  selector:
    matchLabels:
      {{- include "x509-certificate-exporter.selectorLabels" . | nindent 6 }}
      release: kibe-prometheus-stack

Much appreciated.
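
If the chart exposes a value for extra ServiceMonitor labels, setting it there is usually cleaner than editing the template by hand. A sketch only; the key name below is an assumption, so check the chart's values.yaml, and note the label value has to match the operator's selector exactly:

prometheusServiceMonitor:
  create: true
  extraLabels:
    release: kube-prometheus-stack   # must match serviceMonitorSelector.matchLabels in the Prometheus CR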

@achetronic

Hello mates. I am using this chart with a PodMonitor because of the same issue. It's related to the fact that the Service in the chart is created as headless, so the ServiceMonitor is not able to scrape the service even when the selectors are well crafted. I tested these changes and they worked perfectly, but I haven't had time to open an issue with the fix, so for the moment I'm using a PodMonitor (I honestly don't like it much, so I will try to open a PR ASAP).
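
For anyone wanting to try the same workaround, a rough sketch of a PodMonitor for this exporter (labels and port name are assumptions; match them to your release and to the operator's podMonitorSelector):

apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: x509-certificate-exporter
  labels:
    release: kube-prometheus-stack             # so the operator's podMonitorSelector picks it up
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: x509-certificate-exporter
  podMetricsEndpoints:
    - port: metrics                            # name of the exporter's container port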

@npdgm (Member) commented Jul 12, 2023

@achetronic thank you for taking the time to report issues. We really appreciate it.

I agree with your position on PodMonitors. By default, a ServiceMonitor offers greater compatibility with older prometheus-operator versions and is quite standard.

The associated Service was made headless on purpose, as there is no point having an internal load-balancer in front of multiple exporters. Prometheus-operator queries Endpoints directly and should not need kube-proxy or a CNI to provision unneeded network configuration.
I wonder why you, and possibly other users, have an issue with the headless Service. We've been deploying this chart on many Kubernetes distributions, CNIs, and cloud providers, and have never encountered this situation. If you can spare a little time, could you tell us about your environment and any detail you think may differ from most common clusters?
For example:

  • k8s distribution, version, and cloud provider used if it's managed
  • how prometheus-operator was deployed, its version, and whether the kube-prometheus-stack chart was used
  • whether NetworkPolicies are in use
  • CNI used, and whether kube-proxy is employed

Anyhow, I investigated a few charts from prometheus-community and they don't seem to use headless Services. Even though I'm pretty sure other exporters do use headless Services, let's follow the same practice as prometheus-community.
I will open a PR to make the Service a regular ClusterIP by default and move the headless option behind a value flag. You can expect a release fairly soon, as we have a few CI and build changes in the pipe.
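
For illustration, the change being discussed only touches the clusterIP field of the Service; a minimal sketch with assumed names and the exporter's default port:

apiVersion: v1
kind: Service
metadata:
  name: x509-certificate-exporter
spec:
  # clusterIP: None              # current behaviour: headless Service
  type: ClusterIP                # planned default: regular ClusterIP Service
  selector:
    app.kubernetes.io/name: x509-certificate-exporter
  ports:
    - name: metrics
      port: 9793
      targetPort: metrics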

@monkeynator (Member)

🎉 This issue has been resolved in version 3.8.0 🎉

The release is available on GitHub releases.

Your semantic-release bot 📦🚀
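
To pick up the fix, updating the repo and upgrading to the released chart version should be enough (a sketch; the release name and values file are taken from the first comment, the version from the comment above):

helm repo update
helm upgrade x509-certificate-exporter enix/x509-certificate-exporter --version 3.8.0 --values myvalues.yaml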
