Kubernetes 1.7.0 requires cAdvisor changes #2916
Comments
I'd vote for starting to split the Kubernetes example file by version, as I know many companies that are (several) minor versions behind.
Okay, the cAdvisor endpoint is
Should I open a PR, or would that be pointless if this will be rewritten anyway?
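(The endpoint mentioned above was cut off in this copy of the thread. As an illustration only, a throwaway scrape config pointing straight at cAdvisor's standalone port on a single node might look like the sketch below; the node IP is a placeholder, and port 4194 is the cAdvisor port discussed elsewhere in this thread rather than anything quoted from this comment.)

```yaml
# Hypothetical quick check against cAdvisor's standalone port on one node.
scrape_configs:
  - job_name: 'cadvisor-smoke-test'
    static_configs:
      - targets: ['10.0.0.1:4194']  # placeholder node address
```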
@unixwitch I think @grobie's proposal to split this up into different configs for different Kubernetes versions makes sense (we can start with 1.6 vs. 1.7), but otherwise a PR sounds great.
unixwitch referenced this issue on Jul 7, 2017: documentation: update Kubernetes example for 1.7 #2918 (merged)
PR at #2918
brian-brazil added the priority/P2 label on Jul 14, 2017
user9384732902 referenced this issue on Jul 19, 2017: No `kubernetes-pod` `kubernetes-service` target in prometheus. #2956 (closed)
allistera commented on Jul 26, 2017
Does this require a change to the ClusterRole when RBAC is enabled, when changing to use the new endpoint? My ClusterRole is the same one from the example configuration.
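(For context, not from the original comment: a ClusterRole along the lines of the common Prometheus RBAC example is sketched below. When scraping nodes through the API server proxy, the `nodes/proxy` resource is the rule that matters.)

```yaml
# Sketch of a typical Prometheus ClusterRole; resource names follow the common
# Prometheus RBAC example rather than anything quoted in this thread.
apiVersion: rbac.authorization.k8s.io/v1beta1  # rbac.authorization.k8s.io/v1 on newer clusters
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy   # needed to reach the Kubelet/cAdvisor via the API server proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
```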
I've just fixed the same issue by moving the kubernetes-nodes job over to the cAdvisor port.
This makes that job return only the cAdvisor metrics, though. It might make sense to leave the kubernetes-nodes job as is, without the cAdvisor port, and add the following separate job to scrape the cAdvisor metrics:
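(The job posted here is missing from this copy of the thread. Below is a sketch of what such a separate cAdvisor job could look like, assuming the proxy-based scraping style of the surrounding example config and cAdvisor's port 4194; the job name and relabel rules are illustrative, not the commenter's exact config. It would go under `scrape_configs`.)

```yaml
- job_name: 'kubernetes-cadvisor'
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  kubernetes_sd_configs:
  - role: node
  relabel_configs:
  - action: labelmap
    regex: __meta_kubernetes_node_label_(.+)
  # Send requests to the API server and let it proxy them to cAdvisor on each node.
  - target_label: __address__
    replacement: kubernetes.default.svc:443
  - source_labels: [__meta_kubernetes_node_name]
    regex: (.+)
    target_label: __metrics_path__
    replacement: /api/v1/nodes/${1}:4194/proxy/metrics
```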
Having both does lead to duplicate time series for Kubernetes < 1.7, so make sure to either leave one of them out of your config or account for the extra job in your alerts, rules and graphs. There's interesting info coming from the node itself though, so you probably want to keep scraping those metrics as well from 1.7 onwards.
This is what the merged PR does, although it doesn't use the cAdvisor port, since in 1.7.3+ there is a new endpoint for this on the Kubelet metrics port (/metrics/cadvisor). See #2918 for the details.
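(For reference, and as a sketch rather than a quote from the PR: on 1.7.3+ the only difference from the port-4194 variant sketched a few comments up is the metrics-path relabeling, roughly as follows.)

```yaml
  # Replaces the .../nodes/${1}:4194/proxy/metrics rule in the port-based variant above.
  - source_labels: [__meta_kubernetes_node_name]
    regex: (.+)
    target_label: __metrics_path__
    replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
```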
The separate endpoint on port 4194 still exists and will continue to exist, though, so I think it's the common denominator across all setups and therefore the current canonical endpoint.
I don't think the example configuration should use the cAdvisor (port 4194) endpoint. For one thing, kubeadm now disables it by default (in 1.7+, I think), so it won't work in any cluster created by kubeadm. We also disable it in our own clusters, as we prefer the encrypted and authenticated Kubelet endpoint. By contrast, the Kubelet endpoint (/metrics/cadvisor) is available in all clusters by default.
This is only true for 1.7.3+ clusters. But I understand your point. As this is just an example, I'd suggest we move this example to use the /metrics/cadvisor endpoint once it's been established enough (I'd say towards the 1.8 release).
My understanding is that 1.7.0…2 are just broken for Prometheus metrics. I don't think we need to support them. We should default to the new endpoint on the authenticated port. It seems most "production grade" cluster distributions disable the unauthenticated port (at least GKE and kubeadm?).
1.7.3 is out now. The PR for this issue (#2918) is already merged, and the example configuration is very explicit about the different needs of different versions. I think we can close this issue.
Agreed @matthiasr. Thanks everyone!
brancz closed this on Aug 3, 2017
JoergM referenced this issue on Aug 9, 2017: Prometheus scraping of cAdvisor values does not work with Kubernetes 1.7 #1655 (closed)
This was referenced Sep 14, 2017
lorenzo-biava referenced this issue on Sep 21, 2017: [kube-prometheus] cAdvisor metrics are unavailable with Kubeadm default deploy at v1.7.3+ #633 (closed)
dlebrero added a commit to akvo/akvo-platform that referenced this issue on Dec 28, 2017
dlebrero referenced this issue on Dec 28, 2017: Add cAdvisor scraping on top of the default ones #48 (merged)
MarcoGlauser referenced this issue on Apr 18, 2018: Useful labels missing from Prometheus node_exporter in 1.7.x #52858 (closed)
lock bot commented on Mar 23, 2019
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
unixwitch commented on Jul 7, 2017
In Kubernetes 1.7.0, Kubelet no longer reports cAdvisor metrics on its /metrics endpoint; you have to scrape cAdvisor instead, on port 4194. This is apparently intentional, and means the provided kubernetes_sd_configs example doesn't work any more. (Well, it works, but it doesn't return container metrics, which makes it rather less useful than it should be.)
We observed this with an old config that looks like this:
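(The config itself is missing from this copy of the issue. Below is a hedged sketch of what such an old node-scraping job typically looked like; all details are assumptions rather than the reporter's actual config.)

```yaml
# With role: node and no address rewriting, Prometheus scrapes the Kubelet
# address/port advertised in the Node object (10250 in this case), so the
# cAdvisor metrics on port 4194 are never collected.
- job_name: 'kubernetes-nodes'
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    insecure_skip_verify: true
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  kubernetes_sd_configs:
  - role: node
  relabel_configs:
  - action: labelmap
    regex: __meta_kubernetes_node_label_(.+)
```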
Because kubernetes_sd_configs picks up the Kubelet port from the Node definition in apiserver, which is 10250 for us, it doesn't scrape port 4194 and therefore doesn't get the cAdvisor metrics.
We fixed this by adding a separate job to scrape port 4194:
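(Again, the job itself is missing here. Below is a sketch of a separate job along those lines, rewriting the discovered Kubelet address to cAdvisor's port 4194; the relabeling details are assumptions, not the reporter's exact config.)

```yaml
- job_name: 'kubernetes-cadvisor'
  kubernetes_sd_configs:
  - role: node
  relabel_configs:
  - action: labelmap
    regex: __meta_kubernetes_node_label_(.+)
  # Swap the Kubelet port discovered from the Node object for cAdvisor's 4194.
  - source_labels: [__address__]
    regex: '(.*):10250'
    target_label: __address__
    replacement: '${1}:4194'
```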
However, looking at the current version of the example, it's now scraping Kubelet using the apiserver proxy. I assume this was changed to allow scraping nodes without requiring direct access to internal IP addresses. Unfortunately I don't know how to fix this to scrape cAdvisor as well, or if that's even possible. Obvious endpoints like /proxy/cadvisor don't work, and documentation on the proxy endpoints seems hard to find. In any case, the example config should probably be updated to scrape the new endpoint somehow.
(ref: kubernetes/kubernetes#48483)