Kubernetes SD: Optionally attach namespace and node metadata to targets #9510
The obvious workaround in my second use case is to design some kind of admission controller that forces a specific set of labels/annotations on pods based on the namespace. I think that's a reasonable workaround, but I believe meta labels based on an object's location can be used in enough interesting ways to make this feature justifiable.
An example of the labels exposed:
Thanks, good idea to include examples. There are probably a lot of ways this could be configured, but I imagine we would add something similar to the following to `kubernetes_sd_config`:

```yaml
attach_metadata:
  # Attaches node metadata to discovered targets. Only valid for role: pod, endpoints.
  # When set to true, Prometheus must have permissions to get Nodes.
  [ node: <boolean> | default = false ]
  # Attaches namespace metadata to discovered targets. Invalid for role: node.
  # When set to true, Prometheus must have permissions to get Namespaces.
  [ namespace: <boolean> | default = false ]
```
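Presumably the attached metadata would surface as additional `__meta_kubernetes_*` labels usable in `relabel_configs` as usual. A sketch of how that might look — the exact meta label name below is an assumption about how node labels would be exposed, and the node pool label is just an example:

```yaml
scrape_configs:
  - job_name: pods
    kubernetes_sd_configs:
      - role: pod
        attach_metadata:
          node: true   # proposed option; requires permission to get Nodes
    relabel_configs:
      # Copy a node label (here, a hypothetical GKE node pool label) onto
      # every pod target discovered on that node.
      - source_labels: [__meta_kubernetes_node_label_cloud_google_com_gke_nodepool]
        target_label: node_pool
```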
I think this makes sense. The only thing that gives me pause, and that we should think through, is: could this expose information that users can exploit? It has happened in the past in kube-state-metrics, for example, that we accidentally exposed all annotations of Secrets by default, which leaked secret content via the kubectl last-applied-configuration annotation. I just want to make sure we think this through; otherwise I can think of lots of useful use cases for this.
Security considerations are a great point. Off the top of my head, I don't know of any sensitive information that lives inside the metadata for nodes and namespaces, but I agree with being cautious before accepting the proposal. Generally speaking, what is Prometheus' approach to security concerns when adding new features? Does the potential to exploit something (even if that something is disabled by default) generally lead to not adding the new functionality?
We already have node labels because we have a node role. Available meta labels:
I have made a PuppetDB service discovery and I have added:
Just looking at my nodes (I checked one in each of GKE, EKS, and AKS), the labels and annotations are all non-sensitive. The closest things I could construe as sensitive are network metadata (CIDR ranges) and internal names (resource names can include things like company and customer references), but these all live within the standard realm of metadata you would associate with environment telemetry for use in queries. I'm interested in what happened with kube-state-metrics here; labels and annotations seem to be strongly considered by Kubernetes to be non-sensitive identifying information.

For a bit of context on my use case (from the grafana-agent issue): we have clusters split up into pools, and we're interested in running queries on resource utilization metrics grouped by pool (currently not possible for any metrics gathered by something like the kubernetes_sd pod, svc, or endpoints role).

I think the proposal here makes a lot of sense as a generic approach: users can quickly enrich their targets and then handle the relabeling to get what they actually want, as normal. It would definitely fulfill the needs of my use case.
I imagine we'd want something similar for namespaces, though there's a small wart here:
I honestly think those are nice for a cluster admin, but they don't help developers of k8s-hosted services, since their pods and services are usually discovered with endpoint discovery. Another use case we have:
Having this capability for namespaces would be a great help for me.
…s and annotations to discovered pod targets in the same way as Prometheus 2.35 does See prometheus/prometheus#9510 and prometheus/prometheus#10080
It looks like PR #10080 only applies to targets discovered through the |
Proposal
Kubernetes SD should be extended so the service, pod, endpoints, and ingress roles can optionally attach extra meta labels based on the namespace and node objects that the targets are discovered in (where appropriate; ingress wouldn't support node meta labels, etc.).
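As the config sketch above notes, enabling either option would require widening the RBAC permissions Prometheus runs with. A minimal sketch of the additional rules (the role name is made up; the exact verbs required are an assumption based on how Prometheus watches other resources):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-attach-metadata  # hypothetical name
rules:
  # In addition to the usual pod/service/endpoints permissions,
  # attaching metadata would need read access to Nodes and Namespaces.
  - apiGroups: [""]
    resources: ["nodes", "namespaces"]
    verbs: ["get", "list", "watch"]
```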
This could be used within Prometheus, but also within Grafana Agent (for traces) and Promtail (for logs) to use this new meta information in similar ways.
Use case. Why is this important?
There are a few use cases for where this would be useful. One such use case has been documented in grafana/agent#980.
Another use case is filtering scrape targets to those that exist within specific namespaces. In this scenario, managed namespaces are dynamically created on behalf of users, who have full control over the resources deployed in the environment. The namespace, which users do not have permission to modify, would carry labels or annotations that determine whether scraping is enabled. This makes checking labels/annotations on the pods inappropriate, as users have full control over their contents.
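The namespace-gating use case above could then be expressed with ordinary relabeling. A sketch, assuming namespace annotations would be exposed as `__meta_kubernetes_namespace_annotation_*` (an assumption about the eventual label naming; the `example.com/scrape` annotation is made up for illustration):

```yaml
scrape_configs:
  - job_name: managed-namespaces
    kubernetes_sd_configs:
      - role: pod
        attach_metadata:
          namespace: true   # proposed option; requires permission to get Namespaces
    relabel_configs:
      # Keep only targets whose (admin-controlled) namespace opts in to scraping.
      - source_labels: [__meta_kubernetes_namespace_annotation_example_com_scrape]
        regex: "true"
        action: keep
```

Because the gating annotation lives on the namespace rather than the pod, tenants cannot opt themselves in or out by editing their own workloads.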