container_fs_* stats always 0 for all containers created by k8s #55397

Closed
discordianfish opened this Issue Nov 9, 2017 · 9 comments

@discordianfish
Contributor

discordianfish commented Nov 9, 2017

/kind bug

What happened:
All cAdvisor container_fs_* stats are 0, except for a few series without pod/image labels, which I assume belong to the root cgroup:


container_fs_inodes_free{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_fluentd_ds_ready="true",beta_kubernetes_io_instance_type="g1-small",beta_kubernetes_io_os="linux",cloud_google_com_gke_nodepool="pool-small",container_name="prometheus-to-sd-exporter",device="/dev/sda1",failure_domain_beta_kubernetes_io_region="us-east1",failure_domain_beta_kubernetes_io_zone="us-east1-c",id="/kubepods/burstable/pode93369ea-c40b-11e7-bff5-42010af0018b/3f76690535157c729524b48dbaab3f1788d680ad5d859768dc06c05e9ccd5610",image="asia.gcr.io/google-containers/prometheus-to-sd@sha256:c6aaa681e77e55aa7f7017ca55265accde313f8e2e5484ee1d0a4d89ff741c48",instance="gke-latency-at-pool-small-3338bda2-3c8g",job="kubernetes-cadvisor",kubernetes_io_hostname="gke-latency-at-pool-small-3338bda2-3c8g",name="k8s_prometheus-to-sd-exporter_fluentd-gcp-v2.0.9-h7tl7_kube-system_e93369ea-c40b-11e7-bff5-42010af0018b_0",namespace="kube-system",pod_name="fluentd-gcp-v2.0.9-h7tl7"} | 0
container_fs_inodes_free{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_fluentd_ds_ready="true",beta_kubernetes_io_instance_type="g1-small",beta_kubernetes_io_os="linux",cloud_google_com_gke_nodepool="pool-small",container_name="prometheus-to-sd-exporter",device="/dev/sda1",failure_domain_beta_kubernetes_io_region="us-east1",failure_domain_beta_kubernetes_io_zone="us-east1-c",id="/kubepods/burstable/podfc87c089-c3dd-11e7-bff5-42010af0018b/9f3cdcf5306a7a42fd88f8bbe690a15d964010eebcd3cd681d40f785a3dd8107",image="asia.gcr.io/google-containers/prometheus-to-sd@sha256:c6aaa681e77e55aa7f7017ca55265accde313f8e2e5484ee1d0a4d89ff741c48",instance="gke-latency-at-pool-small-3338bda2-2cg2",job="kubernetes-cadvisor",kubernetes_io_hostname="gke-latency-at-pool-small-3338bda2-2cg2",name="k8s_prometheus-to-sd-exporter_fluentd-gcp-v2.0.9-jb7ll_kube-system_fc87c089-c3dd-11e7-bff5-42010af0018b_0",namespace="kube-system",pod_name="fluentd-gcp-v2.0.9-jb7ll"} | 0
container_fs_inodes_free{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_fluentd_ds_ready="true",beta_kubernetes_io_instance_type="g1-small",beta_kubernetes_io_os="linux",cloud_google_com_gke_nodepool="pool-small",container_name="sidecar",device="/dev/sda1",failure_domain_beta_kubernetes_io_region="us-east1",failure_domain_beta_kubernetes_io_zone="us-east1-c",id="/kubepods/burstable/poddee71937-c472-11e7-bff5-42010af0018b/0584c59990c2bd3efe011f4bf5d755659567b3abb2c1a42c7ef49c7b11a69a3a",image="asia.gcr.io/google_containers/k8s-dns-sidecar-amd64@sha256:9aab42bf6a2a068b797fe7d91a5d8d915b10dbbc3d6f2b10492848debfba6044",instance="gke-latency-at-pool-small-3338bda2-3c8g",job="kubernetes-cadvisor",kubernetes_io_hostname="gke-latency-at-pool-small-3338bda2-3c8g",name="k8s_sidecar_kube-dns-4031738344-k8dfb_kube-system_dee71937-c472-11e7-bff5-42010af0018b_0",namespace="kube-system",pod_name="kube-dns-4031738344-k8dfb"} | 0
container_fs_inodes_free{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_fluentd_ds_ready="true",beta_kubernetes_io_instance_type="g1-small",beta_kubernetes_io_os="linux",cloud_google_com_gke_nodepool="pool-small",container_name="tiller",device="/dev/sda1",failure_domain_beta_kubernetes_io_region="us-east1",failure_domain_beta_kubernetes_io_zone="us-east1-c",id="/kubepods/besteffort/pod13ca351f-c484-11e7-bff5-42010af0018b/1315c8d1d2ada471cd0d4636f21766499feb5c66b558425adbf0cf19ec416865",image="gcr.io/kubernetes-helm/tiller@sha256:82677f561f8dd67b6095fe7b9646e6913ee99e1d6fdf86705adbf99a69a7d744",instance="gke-latency-at-pool-small-3338bda2-3c8g",job="kubernetes-cadvisor",kubernetes_io_hostname="gke-latency-at-pool-small-3338bda2-3c8g",name="k8s_tiller_tiller-deploy-3066893457-jcwwg_kube-system_13ca351f-c484-11e7-bff5-42010af0018b_0",namespace="kube-system",pod_name="tiller-deploy-3066893457-jcwwg"} | 0
container_fs_inodes_free{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_fluentd_ds_ready="true",beta_kubernetes_io_instance_type="g1-small",beta_kubernetes_io_os="linux",cloud_google_com_gke_nodepool="small-preemptible",cloud_google_com_gke_preemptible="true",container_name="",device="/dev/root",failure_domain_beta_kubernetes_io_region="us-east1",failure_domain_beta_kubernetes_io_zone="us-east1-c",id="/",image="",instance="gke-latency-at-small-preemptible-0c981b61-1x4z",job="kubernetes-cadvisor",kubernetes_io_hostname="gke-latency-at-small-preemptible-0c981b61-1x4z",name="",namespace="",pod_name=""} | 66386
container_fs_inodes_free{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_fluentd_ds_ready="true",beta_kubernetes_io_instance_type="g1-small",beta_kubernetes_io_os="linux",cloud_google_com_gke_nodepool="small-preemptible",cloud_google_com_gke_preemptible="true",container_name="",device="/dev/root",failure_domain_beta_kubernetes_io_region="us-east1",failure_domain_beta_kubernetes_io_zone="us-east1-c",id="/",image="",instance="gke-latency-at-small-preemptible-0c981b61-9489",job="kubernetes-cadvisor",kubernetes_io_hostname="gke-latency-at-small-preemptible-0c981b61-9489",name="",namespace="",pod_name=""} | 66386
container_fs_inodes_free{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_fluentd_ds_ready="true",beta_kubernetes_io_instance_type="g1-small",beta_kubernetes_io_os="linux",cloud_google_com_gke_nodepool="small-preemptible",cloud_google_com_gke_preemptible="true",container_name="",device="/dev/root",failure_domain_beta_kubernetes_io_region="us-east1",failure_domain_beta_kubernetes_io_zone="us-east1-c",id="/",image="",instance="gke-latency-at-small-preemptible-0c981b61-99zh",job="kubernetes-cadvisor",kubernetes_io_hostname="gke-latency-at-small-preemptible-0c981b61-99zh",name="",namespace="",pod_name=""} | 66386
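
For illustration, one way to pull the affected series out of a Prometheus server that scrapes the job above (a sketch only; the server address is an assumption):

# Query the container-scoped fs usage series via the Prometheus HTTP API;
# replace localhost:9090 with the address of your Prometheus server.
curl -s 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=container_fs_usage_bytes{job="kubernetes-cadvisor",image!=""}'
# On an affected cluster, every returned sample has the value "0".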

What you expected to happen:
These stats should return the correct values.

How to reproduce it (as minimally and precisely as possible):
As far as I can tell, this should happen on a fresh install too.
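
One way to see the raw values for a single node is to dump the kubelet's cAdvisor metrics through the API server proxy (a sketch, not verified on every version; the node name is copied from the dump above, and on older kubelets the metrics may be served on /metrics rather than /metrics/cadvisor):

NODE=gke-latency-at-pool-small-3338bda2-3c8g
kubectl get --raw "/api/v1/nodes/${NODE}/proxy/metrics/cadvisor" \
  | grep '^container_fs_usage_bytes' | grep -v 'image=""'
# On an affected node, every matching (container-scoped) series reports 0.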

Environment:

  • Kubernetes version (use kubectl version): 1.8.1
  • Cloud provider or hardware configuration: GKE
@dims

Member

dims commented Nov 13, 2017

/sig node

@fejta-bot

fejta-bot commented Feb 11, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@discordianfish

Contributor

discordianfish commented Feb 12, 2018

/remove-lifecycle stale
/lifecycle freeze

@fejta-bot

fejta-bot commented May 13, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@discordianfish

Contributor

discordianfish commented May 14, 2018

/remove-lifecycle stale
/lifecycle freeze

@fejta-bot

fejta-bot commented Aug 12, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@discordianfish

Contributor

discordianfish commented Aug 13, 2018

I'm tired of this.

@bpownow

bpownow commented Aug 21, 2018

@discordianfish I'm seeing similar behavior. Let me know if you've resolved it?

@discordianfish

Contributor

discordianfish commented Aug 21, 2018

@bpownow Unfortunately I haven't found a fix. Right now I'm just not looking at these metrics. If I had to fix them, I'd probably look into running cAdvisor standalone or reviving my container_exporter.
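
For illustration, a minimal sketch of running cAdvisor standalone on a node with Docker, as mentioned above (the image tag, port, and mount list are assumptions; adjust them for your container runtime):

# Run standalone cAdvisor and expose its metrics on port 8080.
sudo docker run \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:ro \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --publish=8080:8080 \
  --detach=true \
  --name=cadvisor \
  google/cadvisor:latest
# Filesystem metrics can then be scraped from http://<node-ip>:8080/metrics instead of the kubelet.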
