Customize relabel configs #1166
I want to attach a __meta_kubernetes_pod_node_name label…
I am running into the exact same issue. Based on my testing you are correct that metricRelabelings has no access to any label which hasn't already been built onto the target, i.e. the __meta_* labels. I wanted to update our instance to use __meta_kubernetes_pod_node_name exactly how you referenced it.
you can use the new additional scrape configs feature…
@brancz, no, please, no! The ability to choose which labels to add to targets and which not to add is essential! If you force a predefined algorithm without any ability to customize it, you limit your users. And if the only way to override that behavior is to completely disable the automatic labeling and use additional scrape configs…
You're looking to actually add arbitrary relabeling, which is not something we intend to add, as it creates more confusion for users who are new to Prometheus. If that's the level of customization you need, then please use the additional config. Is there any technical reason why you must have those label names, or is it merely a preference? I don't see an actual benefit over the labels that you get out of the box, and I'd prefer to have a homogeneous landscape of label names and values, as it provides a much better base for sharing alerts and dashboards.
I explained the reasons in the first comment.
And, by the way, if we already have metricRelabelings, why not add relabelings as well?
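For readers who do end up going the additional scrape config route mentioned above, here is a minimal sketch of what it looks like; the job name, Secret name, file name, and keep regex are made up for illustration:

```yaml
# prometheus-additional.yaml, stored in a Secret (names are illustrative)
- job_name: node-exporter-custom
  kubernetes_sd_configs:
  - role: endpoints
  relabel_configs:
  # keep only the node-exporter endpoints (service name is illustrative)
  - source_labels: [__meta_kubernetes_service_name]
    action: keep
    regex: node-exporter
  # at this stage you still have full access to the __meta_* labels
  - source_labels: [__meta_kubernetes_pod_node_name]
    target_label: node
```

```yaml
# Referencing the Secret from the Prometheus custom resource
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s
spec:
  additionalScrapeConfigs:
    name: additional-scrape-configs   # Secret name (illustrative)
    key: prometheus-additional.yaml
```

The trade-off, as discussed above, is that such jobs live entirely outside the ServiceMonitor machinery.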
I really hope that this issue gets reopened and worked on. As @distol said, this is essential functionality that prometheus-operator users should be able to have.
@brancz why would this be confusing to users? I think that it is essential to be able to specify which labels to attach to the metrics without having to specify the whole configuration in an additional scrape config.
That already exists as the targetLabels field.
Oh, I was not aware of that. Will this append labels to the list or replace the original list?
The targetLabels are appended; they do not replace the original labels.
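For anyone else following along: as I understand it, targetLabels takes the names of labels set on the Kubernetes Service object and copies their values onto every series scraped from that Service, in addition to the default labels. A minimal sketch (the Service label name team and the object names are made up):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app          # illustrative
spec:
  selector:
    matchLabels:
      app: example-app
  endpoints:
  - port: web
  targetLabels:
  - team                     # the value of the Service's "team" label becomes a target label
```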
@brancz I'm trying to use the targetLabels field. This is what I get back at the moment:

```
node_memory_MemTotal{
  endpoint="https",
  instance="10.1.94.43:9100",
  job="node-exporter",
  namespace="monitoring",
  pod="node-exporter-nk4pq",
  service="node-exporter"
}
```

and I would like to add the node name to it as a label. I tried this:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    k8s-app: node-exporter
  name: node-exporter
  namespace: monitoring
spec:
  endpoints:
  - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    interval: 30s
    port: https
    scheme: https
    tlsConfig:
      insecureSkipVerify: true
  jobLabel: k8s-app
  selector:
    matchLabels:
      k8s-app: node-exporter
  targetLabels:
  - __meta_kubernetes_node_name
```

and this:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    k8s-app: node-exporter
  name: node-exporter
  namespace: monitoring
spec:
  endpoints:
  - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    interval: 30s
    port: https
    scheme: https
    tlsConfig:
      insecureSkipVerify: true
  jobLabel: k8s-app
  selector:
    matchLabels:
      k8s-app: node-exporter
  metricRelabelings:
    sourceLabels: __meta_kubernetes_node_name
    targetLabel: node
```

But nothing works :( Any idea what is going wrong, or where I can check how to do it? Thanks in advance, and I apologise if this is not the best place to ask, but I can't find anywhere else...
We're discussing that for any node-specific targets this should just always be relabelled onto the target, as it is a frequently requested feature. (Re: #1548 (comment)) A quick reminder though: with the node-exporter you will never get this, because that meta label does not exist for it; it's a normal DaemonSet with normal pods, which do not have that label during discovery. Only the kubelet has it, via node discovery. In any other case you should do a join with kube-state-metrics metrics to get the Pod's node.
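For anyone searching for what that join looks like in practice, here is a minimal sketch written as a recording rule; the rule group, record name, and namespace are made up, and it assumes kube-state-metrics is being scraped so that kube_pod_info is available:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: node-exporter-node-join   # illustrative
  namespace: monitoring
spec:
  groups:
  - name: node-exporter-joins
    rules:
    # kube_pod_info always has the value 1 and carries a "node" label,
    # so multiplying copies that label onto the node-exporter series
    - record: node_memory_MemTotal:node
      expr: |
        node_memory_MemTotal
          * on (namespace, pod) group_left (node)
            kube_pod_info
```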
@hgraca, you can have a look at how I managed to get the node labels added to my targets over at …
@brancz proposing to use the "additional config" for such an important feature is crazy. Why would I even use the operator if I have to replace all ServiceMonitors with my custom configs? Is it really so hard to allow extending relabel_configs the same way that metric_relabel_configs is done?
I totally agree with your concern. I can't understand why there is so much resistance to including support for __meta labels, role: node, and customizing relabel_configs. If people are happy with the defaults, fine. But why not provide the flexibility to customize if needed?
For your information, #1879 just merged. Let me know if that solves the issues mentioned here.
Has anyone used it yet? #1879
For future Googlers, this is my monitor YAML which works with the new relabelings field:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: xxx
spec:
  endpoints:
  - port: http-metrics
    interval: 30s
    relabelings:
    - sourceLabels: [__meta_kubernetes_pod_node_name]
      targetLabel: instance
  namespaceSelector:
    matchNames:
    - xxx
  selector:
    matchLabels:
      service: xxx
```
With respect to node-exporter, do we have any workaround for now to get the node name label in the metrics without an additional scrape config?
My Prometheus knowledge is a bit rusty, so I might be misunderstanding something here about the many stages of relabelling and how things are pieced together.

My motivation so far: I'm on a team running an internal Kubernetes platform used for ~100 services belonging to all sorts of formal and informal teams. To keep track, we've begun tagging a … But for Kubernetes, we have hit a problem with Prometheus and the prometheus-operator: …

From my point of view, allowing us to do cluster-wide relabeling on scrape feels like a very obvious simplification for us and our users. I'm quite surprised it's being rejected on grounds of adding complexity!?

Alternatives: …

Anybody else got ideas?
@msiebuhr you should look into ScrapeClass, which allows appending global config to service/pod monitors.
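For anyone landing here now, a minimal sketch of that approach, assuming a recent operator version where ScrapeClass supports relabelings; the class name and the "team" pod label are made up for illustration:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s
spec:
  scrapeClasses:
  - name: cluster-defaults    # illustrative
    default: true             # applied to all ServiceMonitors/PodMonitors that don't pick another class
    relabelings:
    # copy a hypothetical "team" pod label onto every target
    - sourceLabels: [__meta_kubernetes_pod_label_team]
      targetLabel: team
```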
Prometheus Operator v0.17.0, Prometheus v2.2.1.
Problem
Prometheus Operator already does awesome "automagic" with relabeling, but it lacks the ability to make any customization. Is that good or bad?

- The automagic works great for applications (redis, mongo, any other exporter, etc.), but not for system components (kubelet, node-exporter, kube-scheduler, ...). System components do not need most of the labels (endpoint, namespace, pod, and service). Why should we carry waste labels on all metrics from system components? It makes things much harder when you are writing alerts (you have to drop the excess labels to make alerts meaningful) and when you are writing queries (it's harder to find the meaningful labels among the waste ones).
- We attach tier=production|system|development|... to namespaces, and we want to have that label attached to every metric from applications (it's easy with relabel_configs). Yes, it is possible to do complex "joins" with kube-state-metrics, but we want to have simple and straight queries that are easy to read and modify.
- We want to attach __meta_kubernetes_pod_node_name to all metrics from kubelet and node-exporter.
- metricRelabelings have no access to the __meta_* labels at this phase (or am I missing something?), so we can't do a lot.

Feature request

Allow customizing the relabel configs (maybe a disableAutoLabels: [] field in ServiceMonitor/Endpoint?).
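For context, here is a minimal sketch (not from the original issue) of the kind of raw relabel_configs this request is about, written as if rules could be appended to the operator-generated scrape config for a system component:

```yaml
relabel_configs:
# attach the node name discovered by Kubernetes SD to the target
- source_labels: [__meta_kubernetes_pod_node_name]
  target_label: node
# drop the auto-generated labels that system components do not need
- action: labeldrop
  regex: endpoint|namespace|pod|service
```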