[FeatureRequest] Kubernetes - add API Master meta label for multi cluster setups #2664

boeboe commented Apr 28, 2017

Hi all,

In a multi-cluster setup it currently appears impossible (short of hard-coding an external_label with cluster-specific metadata) to determine which cluster a given metric originated from. Would it be possible to add a label carrying this kind of information?

An extra label containing the Kubernetes Master URL would be sufficient to distinguish metrics in a multi-cluster environment.

Best regards,
Bart
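If such a meta label existed, usage might look like the sketch below. Note that `__meta_kubernetes_master_url` is a hypothetical name invented here for illustration; no such label exists in Prometheus today.

```yaml
scrape_configs:
  - job_name: 'kubernetes-nodes'    # placeholder job name
    kubernetes_sd_configs:
      - role: node
    relabel_configs:
      # Hypothetical: copy the requested (non-existent) master URL meta
      # label onto every target so cross-cluster metrics stay distinguishable.
      - source_labels: [__meta_kubernetes_master_url]
        target_label: cluster
```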
Comments
Are you looking to add a label to everything that comes from a given scrape_config? If so, relabelling already allows for this.
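A minimal sketch of what such a relabelling rule looks like; the job name and cluster value are placeholders:

```yaml
scrape_configs:
  - job_name: 'kubernetes-nodes'    # placeholder job name
    kubernetes_sd_configs:
      - role: node
    relabel_configs:
      # With no source_labels, the default regex matches and the fixed
      # replacement is written into target_label on every target.
      - target_label: cluster
        replacement: datacenter-1   # placeholder cluster name
```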
Hi @brian-brazil, labeling / relabeling / external labels... it's all the same for me. I want to be able to fetch this information dynamically (the way the pod name is added automatically) without having to hard-code these values in the Prometheus config file. I searched the sources for how the Prometheus node-exporter fetches this info, but no luck so far... Any suggestions? BR,
Does a Kubernetes cluster have a notion of its own name? Or is it something that's just operational practice on your side? Given that you have to have a scrape config per cluster anyway, relabelling is the way to go here, I suspect.
@boeboe can you describe further what you are doing? Are you looking at multiple clusters with the same Grafana instance and trying to distinguish them? Are you trying to use Prometheus federation and fan in from multiple Prometheus instances?

As far as I'm aware it does not; the only reason you are getting a URL there @boeboe is because
Hi @brancz and @brian-brazil,

Thanks for your responses so far! We have an HA setup with two clusters in different private data centers. Within each K8S cluster we run one Prometheus instance, which acts as a mere proxy to another Prometheus instance outside of the cluster (for permanent-storage reasons). The Grafana visualisations originate from this second, out-of-cluster instance.

What I am trying to achieve is a single deployment yaml file for the Prometheus proxy deployments in both clusters. I am looking for a way to dynamically enrich the metrics (from the node-exporters deployed as a K8S daemon set) with the cluster Master URL, the only thing that makes metrics unique across clusters. What I do now is use an external_label named cluster in the global configuration section of the Prometheus proxy to add this extra information statically, resulting in separate yaml deployment files per cluster.

We had a similar "issue" with EFK (Elasticsearch, Fluentd and Kibana), but there I found a solution and created pull requests on the relevant GitHub repos. I tried to find a way to do something similar here, but so far failed to find an elegant solution...

Best regards,
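The static workaround described above would look roughly like this in the proxy's configuration; the cluster value is a placeholder and is exactly the part that currently has to differ per deployment file:

```yaml
global:
  external_labels:
    # Placeholder value; today this must be hard-coded
    # differently in each cluster's deployment yaml.
    cluster: https://k8s-master.dc1.example.com
```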
That's the recommended way of handling that. It's expected that each Prometheus server you're running will have different external_labels.
OK, I see your point. I was wondering if there is any way to execute arbitrary shell commands or some interpreted language within the configuration files of Prometheus? An example: in fluentd configuration files, one is able to do the following:
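The fluentd example was presumably something like the following sketch, using the record_transformer filter; this reconstruction is an assumption. The `"#{...}"` interpolation in a double-quoted fluentd config string is embedded Ruby, evaluated when the configuration is loaded:

```
<filter **>
  @type record_transformer
  <record>
    # "#{...}" is embedded Ruby, evaluated at config load time.
    # KUBERNETES_SERVICE_HOST is set automatically inside every pod.
    cluster "#{ENV['KUBERNETES_SERVICE_HOST']}"
  </record>
</filter>
```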
This dynamically adds cluster information (I'm no big fan of hard-coded configuration when it's not needed, as you might suspect), namely the cluster API server DNS name, as a temporary workaround until the feature from the pull request is released as an official Ruby gem. So my question, to wrap up: is there a way to have things executed/interpreted at run/start-up time in the Prometheus configuration file?
No, and that's explicitly out of scope. That's the role of your configuration management system.
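For completeness, a minimal sketch of that approach, assuming a Kubernetes Deployment with an init container that renders a config template before Prometheus starts; every name here (__CLUSTER_NAME__, the image, the volume names) is illustrative, not a real Prometheus feature:

```yaml
# The prometheus.yml template stores a placeholder...
#   global:
#     external_labels:
#       cluster: __CLUSTER_NAME__
#
# ...and an init container substitutes it at pod start-up:
initContainers:
  - name: render-config            # hypothetical names throughout
    image: busybox
    command:
      - sh
      - -c
      - sed "s|__CLUSTER_NAME__|$CLUSTER_NAME|" /tpl/prometheus.yml > /out/prometheus.yml
    env:
      - name: CLUSTER_NAME
        value: datacenter-1        # injected per cluster by config management
    volumeMounts:
      - name: config-template
        mountPath: /tpl
      - name: rendered-config
        mountPath: /out
```

This keeps a single deployment yaml per cluster apart from the injected CLUSTER_NAME value, which is the kind of externalization the comment above delegates to configuration management.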
OK, actually a very good point there. Thanks for the feedback and patience. BR,
boeboe closed this May 2, 2017

boeboe referenced this issue Dec 24, 2017: Support for environment variable substitution in configuration file #2357 (closed)
lock bot commented Mar 23, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.