Expose Prometheus Metrics Server with k8s Service
Closed
Description
During Postgres Operator startup I found that the metrics server is bound to port 8080, but unfortunately it is not exposed by a Kubernetes Service.
Actual log line:
time="2021-11-19T12:44:10Z" level=info msg="metrics server is starting to listen" addr=":8080" file="sigs.k8s.io/controller-runtime@v0.8.3/pkg/log/deleg.go:130" func="log.(*DelegatingLogger).Info" version=5.0.3-0
Prometheus metrics are available at the :8080/metrics endpoint.
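For illustration, a minimal sketch of the kind of Service that would expose this endpoint; the pgo-metrics name, the postgres-operator namespace, and the app.kubernetes.io/name=pgo pod label are assumptions for the example, not taken from PGO's actual manifests:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: pgo-metrics             # hypothetical name
  namespace: postgres-operator  # assumed namespace
spec:
  selector:
    app.kubernetes.io/name: pgo # assumed operator pod label
  ports:
    - name: metrics
      port: 8080
      targetPort: 8080          # the port from the log line above
```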
Activity
jkatz commented on Dec 20, 2021
@alex1989hu These are controller metrics -- is there anything specific you are looking to collect?
alex1989hu commented on Dec 21, 2021
For now, there are no specific metrics I am looking for, but the endpoint is enabled in PGO. I did not check the PGO source code to see whether we register additional metrics or not.
I propose to either expose it (see the PR) or disable the metrics endpoint in PGO.
benjaminjb commented on Oct 18, 2022
Hey Alex, just wanted to let you know that I've added this (and the related PR) to our backlog to make a decision on whether we want to expose those metrics or disable them by default.
genofire commented on Feb 17, 2023
You should also add manifests (PodMonitor or ServiceMonitor) for the Prometheus Operator (like in #94) so that Prometheus can scrape those metrics.
And maybe PrometheusRules with a default set of alerting rules.
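A rough sketch of what those manifests could look like, assuming a metrics Service like the one sketched in the description; all names, namespaces, labels, and the alert expression are hypothetical, not PGO's shipped resources:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: pgo-metrics             # hypothetical name
  namespace: postgres-operator  # assumed namespace
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: pgo  # must match the metrics Service's labels
  endpoints:
    - port: metrics   # the named port on the Service
      interval: 30s
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: pgo-alerts              # hypothetical name
  namespace: postgres-operator
spec:
  groups:
    - name: pgo
      rules:
        - alert: PGOMetricsDown          # illustrative alert only
          expr: up{job="pgo-metrics"} == 0
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: PGO metrics endpoint is not reachable
```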
jcpunk commented on Apr 24, 2023
I'll confess I'd love to see this in the helm and kustomize examples...
benjaminjb commented on May 2, 2025
Hello, just to let you know, the most recent release includes
There's no service, but then there's no service for our other metrics: we just use Prometheus scrape jobs.
It seems to me that the situation fits the request. Given that, I'm going to close this issue, BUT if you think there's more to talk about, please reopen this ticket and let's talk about it.
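For completeness, a minimal sketch of the scrape-job approach mentioned above; the job name, namespace, and pod label used for discovery are assumptions for the example, not PGO's actual configuration:

```yaml
scrape_configs:
  - job_name: pgo-metrics              # hypothetical job name
    kubernetes_sd_configs:
      - role: pod
        namespaces:
          names: [postgres-operator]   # assumed namespace
    relabel_configs:
      # Keep only pods carrying the (assumed) operator label.
      - source_labels: [__meta_kubernetes_pod_label_app_kubernetes_io_name]
        regex: pgo
        action: keep
      # Point the scrape target at the pod IP on the metrics port from the log above.
      - source_labels: [__meta_kubernetes_pod_ip]
        replacement: $1:8080
        target_label: __address__
```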