Help - Missing container's name with latest version #1873

Open
jmirc opened this issue Jan 23, 2018 · 8 comments
Comments


jmirc commented Jan 23, 2018

Hi,

I deployed cAdvisor in a Rancher cluster using the docker-compose configuration below, but I am not able to get the containers' names.
Can you help me fix the issue?

Info

cadvisor: latest
Rancher: v1.6.13
Docker Version: 1.12.6
Docker API Version: 1.24
Kernel Version: 3.10.0-693.11.6.el7.x86_64
OS Version: CentOS Linux 7 (Core)

docker-compose.yml

  cadvisor:
    privileged: true
    image: google/cadvisor:latest
    stdin_open: true
    volumes:
    - /:/rootfs:ro
    - /var/run:/var/run:rw
    - /cgroup:/cgroup:ro
    - /var/lib/docker/:/var/lib/docker:ro
    tty: true
    ports:
    - 8080:8080/tcp
    labels:
      io.rancher.scheduler.global: 'true'
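
For comparison, the quick-start command in the cAdvisor README at the time mounted /sys rather than /cgroup; whether that difference matters on this CentOS 7 host (where cgroups live under /sys/fs/cgroup, not /cgroup) is only an assumption, but it is easy to try:

# Roughly the upstream README invocation; adjust names and ports as needed
sudo docker run \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:ro \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --publish=8080:8080 \
  --detach=true \
  --name=cadvisor \
  google/cadvisor:latest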

cAdvisor /metrics

# HELP cadvisor_version_info A metric with a constant '1' value labeled by kernel version, OS version, docker version, cadvisor version & cadvisor revision.
# TYPE cadvisor_version_info gauge
cadvisor_version_info{cadvisorRevision="1e567c2",cadvisorVersion="v0.28.3",dockerVersion="1.12.6",kernelVersion="3.10.0-693.11.6.el7.x86_64",osVersion="Alpine Linux v3.4"} 1
# HELP container_cpu_load_average_10s Value of container cpu load average over the last 10 seconds.
# TYPE container_cpu_load_average_10s gauge
container_cpu_load_average_10s{id="/"} 0
# HELP container_cpu_system_seconds_total Cumulative system cpu time consumed in seconds.
# TYPE container_cpu_system_seconds_total counter
container_cpu_system_seconds_total{id="/"} 0.98
# HELP container_cpu_usage_seconds_total Cumulative cpu time consumed per cpu in seconds.
# TYPE container_cpu_usage_seconds_total counter
container_cpu_usage_seconds_total{cpu="cpu00",id="/"} 0.700235834
container_cpu_usage_seconds_total{cpu="cpu01",id="/"} 0.535064255
container_cpu_usage_seconds_total{cpu="cpu02",id="/"} 0.509572365
container_cpu_usage_seconds_total{cpu="cpu03",id="/"} 0.646855628
# HELP container_cpu_user_seconds_total Cumulative user cpu time consumed in seconds.
# TYPE container_cpu_user_seconds_total counter
container_cpu_user_seconds_total{id="/"} 1.17
# HELP container_fs_inodes_free Number of available Inodes
# TYPE container_fs_inodes_free gauge
container_fs_inodes_free{device="/dev/mapper/docker-259:2-1627-c36591c389ae705224d34b153bfc61cda5f7943b15440695af8ad893401783c0",id="/"} 5.240956e+06
container_fs_inodes_free{device="/dev/nvme0n1p1",id="/"} 4.1904433e+07
container_fs_inodes_free{device="shm",id="/"} 956222
container_fs_inodes_free{device="tmpfs",id="/"} 956057
jmirc changed the title from "Help - Missing container's name" to "Help - Missing container's name with prometheus" on Jan 24, 2018
jmirc changed the title from "Help - Missing container's name with prometheus" to "Help - Missing container's name with latest version" on Jan 24, 2018

marcbachmann commented Feb 5, 2018

I hit the same problem; labels are also completely gone.
This might be a regression introduced in #1831.

I'm using the following Dockerfile to build cAdvisor and then run it outside of Docker:

# Dockerfile
FROM amd64/golang:1.8
# Fetch the cAdvisor sources without building them yet
RUN go get -d github.com/google/cadvisor
WORKDIR /go/src/github.com/google/cadvisor
RUN git checkout v0.28.3
# ARCH was unset in the original; default it so build.sh receives a valid GOARCH
ARG ARCH=amd64
RUN GO_CMD=build GOARCH=$ARCH ./build/build.sh && cp cadvisor /bin/cadvisor
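
To turn that image into a binary you can run on the host, one possible sequence is the following (image and container names are placeholders):

docker build -t cadvisor-build .
docker create --name cadvisor-extract cadvisor-build
docker cp cadvisor-extract:/bin/cadvisor ./cadvisor
docker rm cadvisor-extract
sudo ./cadvisor    # serves the web UI and /metrics on port 8080 by default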

dashpole (Collaborator) commented Feb 5, 2018

I can't seem to reproduce the issue on head.
$ docker run -d --name=abc123 busybox /bin/sh -c 'sleep 600'
69b7c6640be01a52df751e5d6e45341fbfc7297bd86d3561414bde3201dbe048

$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
69b7c6640be0 busybox "/bin/sh -c 'sleep..." 3 seconds ago Up 2 seconds abc123

$ curl localhost:8080/metrics | grep abc123 | grep container_cpu_usage_seconds_total
container_cpu_usage_seconds_total{container_label_annotation_io_kubernetes_container_hash="",container_label_annotation_io_kubernetes_container_ports="",container_label_annotation_io_kubernetes_container_restartCount="",container_label_annotation_io_kubernetes_container_terminationMessagePath="",container_label_annotation_io_kubernetes_container_terminationMessagePolicy="",container_label_annotation_io_kubernetes_pod_terminationGracePeriod="",container_label_annotation_kubernetes_io_config_hash="",container_label_annotation_kubernetes_io_config_seen="",container_label_annotation_kubernetes_io_config_source="",container_label_annotation_kubernetes_io_created_by="",container_label_annotation_scheduler_alpha_kubernetes_io_critical_pod="",container_label_component="",container_label_controller_revision_hash="",container_label_io_kubernetes_container_logpath="",container_label_io_kubernetes_container_name="",container_label_io_kubernetes_docker_type="",container_label_io_kubernetes_pod_name="",container_label_io_kubernetes_pod_namespace="",container_label_io_kubernetes_pod_uid="",container_label_io_kubernetes_sandbox_id="",container_label_k8s_app="",container_label_kubernetes_io_cluster_service="",container_label_maintainer="",container_label_name="",container_label_pod_template_generation="",container_label_pod_template_hash="",container_label_tier="",container_label_version="",cpu="cpu00",id="/docker/69b7c6640be01a52df751e5d6e45341fbfc7297bd86d3561414bde3201dbe048",image="busybox",name="abc123"} 0.032998774

dashpole (Collaborator) commented Feb 5, 2018

The initial report looks like cAdvisor is only recognizing the root cgroup, so it may be a different issue.
@marcbachmann, can you run the same experiment I ran above?


alrf commented Mar 26, 2018

I have the same issue - there are no container names.

cadvisor_version_info{cadvisorRevision="1e567c2",cadvisorVersion="v0.28.3",dockerVersion="17.12.0-ce",kernelVersion="4.4.30-32.54.amzn1.x86_64",osVersion="Alpine Linux v3.4"} 1
# HELP container_cpu_load_average_10s Value of container cpu load average over the last 10 seconds.
# TYPE container_cpu_load_average_10s gauge
container_cpu_load_average_10s{id="/"} 0
# HELP container_cpu_system_seconds_total Cumulative system cpu time consumed in seconds.
# TYPE container_cpu_system_seconds_total counter
container_cpu_system_seconds_total{id="/"} 0.07
# HELP container_cpu_usage_seconds_total Cumulative cpu time consumed per cpu in seconds.
# TYPE container_cpu_usage_seconds_total counter
container_cpu_usage_seconds_total{cpu="cpu00",id="/"} 0.123321595
container_cpu_usage_seconds_total{cpu="cpu01",id="/"} 0.127031709
container_cpu_usage_seconds_total{cpu="cpu02",id="/"} 0.131574615
container_cpu_usage_seconds_total{cpu="cpu03",id="/"} 0.123025697
# HELP container_cpu_user_seconds_total Cumulative user cpu time consumed in seconds.
# TYPE container_cpu_user_seconds_total counter
container_cpu_user_seconds_total{id="/"} 0.09
# HELP container_fs_inodes_free Number of available Inodes
# TYPE container_fs_inodes_free gauge
container_fs_inodes_free{device="/dev/mapper/docker-202:1-263391-1d5b3b65676d417f5f0a060975be7b4dbc566f700df2cb246ddb832acb1a409e",id="/"} 1.0483326e+07
container_fs_inodes_free{device="/dev/mapper/docker-202:1-263391-d9c3e444740218ff68d35502ac51fcc57f50f69f8d24db86db60067673386f80",id="/"} 1.0436712e+07
container_fs_inodes_free{device="/dev/xvda1",id="/"} 7.812362e+06
container_fs_inodes_free{device="shm",id="/"} 2.05452e+06
container_fs_inodes_free{device="tmpfs",id="/"} 2.054359e+06
# HELP container_fs_inodes_total Number of Inodes
# TYPE container_fs_inodes_total gauge
container_fs_inodes_total{device="/dev/mapper/docker-202:1-263391-1d5b3b65676d417f5f0a060975be7b4dbc566f700df2cb246ddb832acb1a409e",id="/"} 1.0484736e+07
container_fs_inodes_total{device="/dev/mapper/docker-202:1-263391-d9c3e444740218ff68d35502ac51fcc57f50f69f8d24db86db60067673386f80",id="/"} 1.0484736e+07
container_fs_inodes_total{device="/dev/xvda1",id="/"} 7.86432e+06
container_fs_inodes_total{device="shm",id="/"} 2.054521e+06
container_fs_inodes_total{device="tmpfs",id="/"} 2.054521e+06
# HELP container_fs_io_current Number of I/Os currently in progress
# TYPE container_fs_io_current gauge
container_fs_io_current{device="/dev/mapper/docker-202:1-263391-1d5b3b65676d417f5f0a060975be7b4dbc566f700df2cb246ddb832acb1a409e",id="/"} 0
container_fs_io_current{device="/dev/mapper/docker-202:1-263391-d9c3e444740218ff68d35502ac51fcc57f50f69f8d24db86db60067673386f80",id="/"} 0
container_fs_io_current{device="/dev/xvda1",id="/"} 0
container_fs_io_current{device="shm",id="/"} 0
container_fs_io_current{device="tmpfs",id="/"} 0
# HELP container_fs_io_time_seconds_total Cumulative count of seconds spent doing I/Os
# TYPE container_fs_io_time_seconds_total counter
container_fs_io_time_seconds_total{device="/dev/mapper/docker-202:1-263391-1d5b3b65676d417f5f0a060975be7b4dbc566f700df2cb246ddb832acb1a409e",id="/"} 0
container_fs_io_time_seconds_total{device="/dev/mapper/docker-202:1-263391-d9c3e444740218ff68d35502ac51fcc57f50f69f8d24db86db60067673386f80",id="/"} 0
container_fs_io_time_seconds_total{device="/dev/xvda1",id="/"} 5.5168e-05

@jcalderin

I am having the same problem: the container with id="/" is consuming a big chunk of disk, and I can't discover which container is doing it because there is no name or ID.

@dashpole (Collaborator)

"/" is the ID of the "root" cgroup, which gives machine stats, and doesn't have a container name. You should expect this to always be present. The issue appears to be that metrics for other cgroups are missing, which could have a variety of different causes.


giuliohome commented Dec 8, 2022

I've written a workaround PR for murre.

Here is an extract from my cAdvisor metrics API exporter, for your information and consideration:


container_cpu_user_seconds_total{container="",id="/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd8b8fe30652798a5ae3fc51e66ef681.slice",image="",name="",namespace="kube-system",pod="kube-apiserver-minikube"} 160.61388 1670531477354
container_cpu_user_seconds_total{container="",id="/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd8b8fe30652798a5ae3fc51e66ef681.slice/docker-5ccbb31a3ebf6bdd2ade25d78cef3adef01ee3efb0a34b125a7e2e7892c60482.scope",image="",name="",namespace="",pod=""} 161.197782 1670531483306
container_cpu_user_seconds_total{container="",id="/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd8b8fe30652798a5ae3fc51e66ef681.slice/docker-9af0fca83e021ceafda25d14ed5419d85e9f711a30f4c7934429f8c59392f62d.scope",image="",name="",namespace="",pod=""} 0.029894 1670531475346
container_cpu_user_seconds_total{container="",id="/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec453326_1d36_4caa_b658_f4cb9615efa0.slice",image="",name="",namespace="default",pod="gke-golang-web-68d985fb4b-g62j5"} 0.075543 1670531476933
container_cpu_user_seconds_total{container="",id="/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec453326_1d36_4caa_b658_f4cb9615efa0.slice/docker-85db71d8f846adb067b706d9d447280b6d5da1e03f4c2444f9fe8ba78254e183.scope",image="",name="",namespace="",pod=""} 0.061312 1670531483969
container_cpu_user_seconds_total{container="",id="/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec453326_1d36_4caa_b658_f4cb9615efa0.slice/docker-961e1c2a77f10c361bba513bea68b6cb8674a7ff56973abc17c63ba94de50df0.scope",image="",name="",namespace="",pod=""} 0.012094 1670531486129
container_cpu_user_seconds_total{container="",id="/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff0c617d_7957_4be3_8df5_a7f40f2c6c34.slice",image="",name="",namespace="kube-system",pod="metrics-server-769cd898cd-986hc"} 16.994546 1670531480261
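
For reference, the same cAdvisor metrics can usually be pulled through the kubelet via the API server proxy; the node name below is a placeholder based on the minikube setup shown above:

kubectl get --raw "/api/v1/nodes/minikube/proxy/metrics/cadvisor" | grep '^container_cpu_user_seconds_total' | head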

@DEvil0000

I had the same issue when setting the user for the container to something other than root/UID 0.
Not setting the user with Docker solved it for me.
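
A minimal sketch of that difference; the volume flags follow the quick-start command earlier in the thread, and the non-root UID is just an example:

# Default: cAdvisor runs as root and can read the cgroup hierarchy and /var/lib/docker
sudo docker run -d --name=cadvisor \
  --volume=/:/rootfs:ro --volume=/var/run:/var/run:ro \
  --volume=/sys:/sys:ro --volume=/var/lib/docker/:/var/lib/docker:ro \
  --publish=8080:8080 google/cadvisor:latest

# Adding something like --user 1000:1000 makes those paths unreadable inside the
# container, which can leave only the root cgroup ("/") in the exported metrics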
