
Prometheus cannot discover services, pods, etc. from Kubernetes user namespaces #1951

Closed
sudhi-vm opened this Issue Sep 6, 2016 · 10 comments

sudhi-vm commented Sep 6, 2016

The current Kubernetes discovery module assumes that services, pods, etc. are always available at the cluster-wide paths "/api/v1/services", "/api/v1/pods", and so on. Kubernetes supports hosting services, pods, etc. in different namespaces. If the Prometheus server itself is deployed in a pod running within a namespace, then it has access only to resources in that namespace. Trying to access "/api/v1/pods" or "/api/v1/services" without specifying a namespace then results in a "403 Forbidden" from the API server.

sudhi-vm commented Sep 6, 2016

CC: @fabxc

grobie commented Sep 6, 2016

The Prometheus Kubernetes discovery already supports Kubernetes namespaces. By default, all namespaces get scraped; this can be changed by using relabeling on __meta_kubernetes_service_namespace and the other namespace meta labels.

"/api/v1/services", "/api/v1/pods", etc. paths. This points to the default namespace in Kubernetes.

This is not true, these endpoints are namespace agnostic.

The right discovery API prefix should be namespaced, for e.g., "/api/v1/namespaces/default/services" or "/api/v1/namespaces/tenant1/services".

No, we want to be able to scrape all namespaces at once.
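The relabeling approach mentioned above can be sketched roughly as follows; the namespace name tenant1 is a placeholder, and the surrounding scrape configuration is elided:

```yaml
relabel_configs:
  # Keep only discovered targets whose service lives in the
  # tenant1 namespace (namespace name is illustrative); all
  # other targets are dropped before scraping.
  - source_labels: [__meta_kubernetes_service_namespace]
    regex: tenant1
    action: keep
```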

sudhi-vm commented Sep 6, 2016

@grobie I agree that "/api/v1/services" etc. are namespace-agnostic. My comment "This points to the default namespace in Kubernetes." was incorrect.

Consider the use case where the Prometheus server is deployed in a user namespace in Kubernetes (which is what we are doing). In this case, the Prometheus server has access only to resources in that namespace. With RBAC/ABAC enabled on Kubernetes, this results in a "403 Forbidden" response from the Kubernetes API server during discovery.

One solution could be to use "/api/v1/namespaces/{{namespace}}/services" if a namespace is provided in the config, and "/api/v1/services" otherwise. I updated the issue description & commit accordingly.

grobie commented Sep 6, 2016

> If prometheus server itself is deployed in a pod running within a namespace, then it has access only to resources in that namespace.

That also depends on the respective authorization settings, I think. We actually deploy our Prometheus server in a custom namespace and gave that namespace the necessary rights (via an ABAC authorization policy file) to scrape all other namespaces.

I'd like to avoid introducing another configuration option if possible. Could you solve your problem by changing your authorization policy?
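For reference, with Kubernetes RBAC (which has largely superseded the ABAC policy files mentioned above), the cluster-wide read access described here can be sketched as a ClusterRole; the role name and resource list are illustrative:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus            # illustrative name
rules:
  - apiGroups: [""]
    resources: ["nodes", "services", "endpoints", "pods"]
    verbs: ["get", "list", "watch"]
```

Bound to Prometheus's service account via a ClusterRoleBinding, this grants read-only discovery access across all namespaces.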

sudhi-vm commented Sep 6, 2016

The specific use case is that we have our application and Prometheus deployed in a custom namespace in Kubernetes. We want the Prometheus server to scrape metrics only from our application pods & services running within that custom namespace.

Having an authorization policy that allows access to other namespaces does not seem right from a security point of view. Another option that may avoid an extra config param is to automatically read the namespace from /var/run/secrets/kubernetes.io/serviceaccount/namespace (http://kubernetes.io/docs/user-guide/accessing-the-cluster/#accessing-the-api-from-a-pod); this could be tied to the in_cluster configuration. What do you think?

jimmidyson commented Sep 6, 2016

While I'm not keen on it personally, I can see the valid use case that we can't cover without a loose auth policy. If we were to do this, I would prefer to support an optional list of namespaces via config, although that increases the number of watches required to be set up (they are cheap, I guess). We can easily retain backwards compatibility here by keeping the existing behavior when no namespaces are specified.

sudhi-vm added a commit to sudhi-vm/prometheus that referenced this issue Sep 12, 2016

fabxc commented Sep 23, 2016

Sorry, I overlooked this issue. It seems the Kubernetes SD simply will not work with authorization policies set up, which is not really acceptable.

I'm not familiar with the authz behavior. Do we have a way to determine which namespaces we have access to, so that we could do this automatically behind the scenes?
I'd argue for leaving this issue open and investigating this after the main changes are done.

nsams commented Oct 27, 2016

In OpenShift there is the additional endpoint /oapi/v1/projects, which returns namespaces. But that would require an OpenShift SD...

grobie commented May 25, 2017

Implemented in #2642.
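The change referenced here added support for restricting Kubernetes SD to specific namespaces. Under that option, a scrape configuration looks roughly like this (the job name and namespace are placeholders):

```yaml
scrape_configs:
  - job_name: kubernetes-pods   # placeholder job name
    kubernetes_sd_configs:
      - role: pod
        namespaces:
          names:
            - tenant1           # only discover pods in this namespace
```

When the namespaces list is omitted, discovery continues to cover all namespaces, preserving the previous behavior.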

grobie closed this May 25, 2017

tedmiston added a commit to astronomer/helm.astronomer.io that referenced this issue Jul 23, 2018

lock bot commented Mar 23, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

lock bot locked and limited conversation to collaborators Mar 23, 2019
