
[FeatureRequest] Kubernetes - add API Master meta label for multi cluster setups #2664

Closed
boeboe opened this Issue Apr 28, 2017 · 10 comments


boeboe commented Apr 28, 2017

Hi all,

In a multi-cluster setup it currently looks impossible (unless you use a hard-coded external_label with cluster-specific metadata) to determine the source cluster from which the metrics originate. Would it be possible to add a meta label carrying this kind of information?

# kubectl cluster-info
Kubernetes master is running at https://vm-48-124.eng.lab.tlv.redhat.com:8443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Having an extra label carrying the Kubernetes master URL would be sufficient to distinguish metrics in a multi-cluster environment.

Best regards,
Bart

brian-brazil commented Apr 28, 2017

Are you looking to add a label to everything that comes from a given scrape_config? If so, relabelling already allows for this.
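
(For illustration, not part of the original comment: a minimal sketch of such a relabelling rule; the job name, role and label value below are placeholders.)

scrape_configs:
  - job_name: kubernetes-nodes
    kubernetes_sd_configs:
      - role: node
    relabel_configs:
      # With no source_labels, the default regex matches everything, so this
      # simply attaches a fixed "cluster" label to every target of this job.
      - target_label: cluster
        replacement: cluster-a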

boeboe commented Apr 28, 2017

Hi @brian-brazil,

Labeling / relabeling / external labels ... it's all the same to me. I want to be able to fetch this information automatically (the way the pod name is added automatically) without having to hard-code the values in the Prometheus config file. I searched in the sources for how the Prometheus node-exporter setup fetches this info, but no luck so far... Any suggestions?

BR,
Bart

brian-brazil commented Apr 28, 2017

Does a kubernetes cluster have a notion of its own name? Or is it something that's just operational practice on your side?

Given that you have to have a scrape config per cluster anyway, relabelling is the way to go here I suspect.
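
(For illustration, not part of the original comment: assuming a single Prometheus scrapes both clusters, a scrape config per cluster with its own relabel rule might look like the sketch below; the hostnames are made up and authentication/TLS settings are omitted.)

scrape_configs:
  - job_name: cluster-a-nodes
    kubernetes_sd_configs:
      # api_server, credentials and TLS settings would point at cluster A.
      - role: node
        api_server: https://cluster-a.example.com:8443
    relabel_configs:
      - target_label: cluster
        replacement: cluster-a
  - job_name: cluster-b-nodes
    kubernetes_sd_configs:
      - role: node
        api_server: https://cluster-b.example.com:8443
    relabel_configs:
      - target_label: cluster
        replacement: cluster-b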

brancz commented May 2, 2017

@boeboe can you describe in more detail what you are doing? Are you looking at multiple clusters from the same Grafana instance and trying to distinguish between them? Are you trying to use Prometheus federation and fan in from multiple Prometheus instances?

Does a kubernetes cluster have a notion of its own name?

As far as I'm aware it does not. The only reason you are getting a URL there, @boeboe, is that kubectl is printing the DNS name it itself uses to access the cluster's apiserver, which in turn likely gets us to @brian-brazil's conclusion (if we have the full picture):

Given that you have to have a scrape config per cluster anyway, relabelling is the way to go here I suspect.

boeboe commented May 2, 2017

Hi @brancz and @brian-brazil,

Thanks for your responses so far!

We have an HA setup with two clusters in different private data centers. Within each K8S cluster we run one Prometheus instance, which acts as a mere proxy to another Prometheus instance outside of the cluster (for permanent-storage reasons). The Grafana visualisations originate from this second, out-of-cluster instance.

What I am trying to achieve is a single deployment YAML file for the Prometheus proxy deployments in both clusters. I am looking for a way to dynamically enrich the metrics (from the node-exporters deployed as a K8S DaemonSet) with the cluster master URL (the only thing that makes metrics unique across clusters). What I do now is use an external_labels entry named cluster in the global configuration section of the Prometheus proxy to add this extra information statically (resulting in separate YAML deployment files per cluster).
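
(For illustration, not part of the original comment: the static approach described above corresponds roughly to the snippet below, using the master hostname quoted earlier in this thread as the label value.)

global:
  external_labels:
    # Attached to every metric leaving this Prometheus via federation,
    # remote write or alerts; static, hence one config file per cluster.
    cluster: vm-48-124.eng.lab.tlv.redhat.com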

We had a similar "issue" with regard to EFK (Elasticsearch, Fluentd and Kibana), but I found a solution and created a pull request for it on the relevant GitHub repos.

I tried to find a way to do something similar here, but so far failed to find an elegant solution...

Best regards,
Bart

brian-brazil commented May 2, 2017

What I do now is use an external_labels entry named cluster in the global configuration section of the Prometheus proxy to add this extra information statically

That's the recommended way of handling that. It's expected that each Prometheus server you're running will have different external_labels.

boeboe commented May 2, 2017

Hi @brian-brazil

Ok, I see your point. Is there any way to execute arbitrary shell commands, or some interpreted language, within the Prometheus configuration files?

An example... in fluentd configuration files, one is able to do the following:

<filter **>
  @type record_transformer
  <record>
	cluster "#{%x[ grep search /etc/resolv.conf | awk '{print $NF}' ]}"
  </record>
</filter>

This dynamically adds cluster information (I'm no big fan of hard-coded configuration when it can be avoided, as you might suspect), namely the cluster API server's DNS search name, as a temporary workaround until the feature implemented in the pull request is released as an official Ruby gem. So, to wrap up: is there a way to have things executed or interpreted at run/start-up time in the Prometheus configuration file?

brian-brazil commented May 2, 2017

Ok, I see your point. Is there any way to execute arbitrary shell commands, or some interpreted language, within the Prometheus configuration files?

No, and that's explicitly out of scope. That's the role of your configuration management system.
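
(For illustration, not part of the original comment: one way configuration management could inject the value at start-up, mirroring the resolv.conf trick above, is an initContainer that renders the config before Prometheus starts. The image, paths and __CLUSTER__ placeholder below are illustrative only; config-template is assumed to be a ConfigMap volume and config an emptyDir shared with the Prometheus container.)

initContainers:
  - name: render-prometheus-config
    image: busybox
    command:
      - sh
      - -c
      # Replace the placeholder with the cluster's DNS search domain, then
      # write the rendered config where the Prometheus container mounts it.
      - >
        sed "s|__CLUSTER__|$(grep search /etc/resolv.conf | awk '{print $NF}')|"
        /etc/prometheus-template/prometheus.yml > /etc/prometheus/prometheus.yml
    volumeMounts:
      - name: config-template
        mountPath: /etc/prometheus-template
      - name: config
        mountPath: /etc/prometheus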

boeboe commented May 2, 2017

Ok,

Actually a very good point there. Thanks for the feedback and patience.

BR,
Bart
