Kubernetes SD: Drop __meta_kubernetes_{pod,container}_name labels #1833
Comments
In the spirit of 1.0, I don't think we should be dropping anything like this until 2.0.
Isn't this still marked as experimental? Even so, I kinda agree it should remain, now I think about it. Closing...
jimmidyson closed this Jul 20, 2016
barkerd427 commented Jul 20, 2016
Jimmi, could you explain why you think it should stay now?
Thinking about it, it seems to make more sense to use the same name in discovery as would be retrieved via scraping. Another option is that they could be dropped in the metric relabelling (post-scrape, pre-ingestion) phase, I guess? I'm erring towards renaming the discovered labels to match the scraped labels, but will defer to the project maintainers for what compatibility guarantees they want around this.
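For illustration, the "drop in metric relabelling" option mentioned above could look roughly like the following `metric_relabel_configs` stanza. This is a hedged sketch, not config from the thread: the job name and regex are illustrative, and the `labeldrop` action only exists in Prometheus versions that support it (it landed after this discussion).

```yaml
scrape_configs:
  - job_name: 'kubernetes-cadvisor'   # illustrative job name
    metric_relabel_configs:
      # Drop the Docker-inherited labels that duplicate the
      # Kubernetes SD discovery labels (post-scrape, pre-ingestion).
      - action: labeldrop
        regex: 'io_kubernetes_(pod_name|pod_namespace|container_name)'
```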
Just as a side thought: Duplicated labels do not create more time series and have negligible storage overhead; only indexing becomes a bit more expensive.
@beorn7 That is interesting - when I read:

I assumed that reducing label duplication would be a good thing, but from what you've said it doesn't matter too much?
Performance-wise it doesn't matter much, but it's good practice that each label helps distinguish a time series. This really sounds like a cAdvisor issue, as Prometheus can't do anything about arbitrary instrumentation labels.
cAdvisor just exposes all Docker labels that are applied to the containers. Kubernetes applies the `io.kubernetes.*` labels when it creates containers.
That must be coming from your configuration of Prometheus: Prometheus performs no such mapping out of the box, and I don't see it in the example either.
Yeah, the wording above is not entirely correct. It should read “... every unique combination of key-value label pairs …”. Label duplication doesn't increase the cardinality of unique label pairs. Note that this is tangential to the semantics discussed in this issue. As Brian said, the fact that the storage deals well with it should not keep us from sane semantics. |
@brian-brazil Doh! Yes that would be it... Sorry for the noise...
Actually this is a problem in Kubernetes - a relic from before it set Docker labels properly when creating containers. Fixing there. |
lock bot commented Mar 24, 2019
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
jimmidyson commented Jul 20, 2016
Currently Prometheus pod discovery adds the `__meta_kubernetes_pod_name` label, but the same value is also inherited via Docker labels that Kubernetes sets as `io.kubernetes.pod.name` (available as the sanitized Prometheus label `io_kubernetes_pod_name`), exposed directly from cAdvisor. The same applies to pod namespace & container name (`io_kubernetes_pod_namespace` & `io_kubernetes_container_name` respectively). It seems reasonable not to duplicate labels, to reduce data storage; in this case there are at least 3 labels that are duplicated for all pods.
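The sanitization mentioned above follows from Prometheus requiring label names to match `[a-zA-Z_][a-zA-Z0-9_]*`: characters outside that set (such as the dots in a Docker label key) are replaced with underscores, which is how `io.kubernetes.pod.name` becomes `io_kubernetes_pod_name`. A minimal sketch of that rule (the function name is mine, not a Prometheus API):

```python
import re


def sanitize_label_name(name: str) -> str:
    """Map a source label key to a valid Prometheus label name.

    Prometheus label names must match [a-zA-Z_][a-zA-Z0-9_]*, so every
    other character is replaced with an underscore.
    """
    return re.sub(r"[^a-zA-Z0-9_]", "_", name)


print(sanitize_label_name("io.kubernetes.pod.name"))        # io_kubernetes_pod_name
print(sanitize_label_name("io.kubernetes.container.name"))  # io_kubernetes_container_name
```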
This has been available from the 1.2.0 pre-releases; the GA release was 2016-03-16. Guess we need to pick a support policy for Kubernetes SD as well?
Any thoughts on this @pdbogen?