cAdvisor

cAdvisor (Container Advisor) provides container users an understanding of the resource usage and performance characteristics of their running containers. It is a running daemon that collects, aggregates, processes, and exports information about running containers. Specifically, for each container it keeps resource isolation parameters, historical resource usage, histograms of complete historical resource usage and network statistics. This data is exported by container and machine-wide.

cAdvisor has native support for Docker containers and should support just about any other container type out of the box. We strive for support across the board, so feel free to open an issue if that is not the case. cAdvisor's container abstraction is based on lmctfy's, so containers are inherently nested hierarchically.


Quick Start: Running cAdvisor in a Docker Container

To quickly try out cAdvisor on your machine with Docker, we have a Docker image that includes everything you need to get started. You can run a single cAdvisor instance to monitor the whole machine. Simply run:

sudo docker run \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:ro \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --volume=/dev/disk/:/dev/disk:ro \
  --publish=8080:8080 \
  --detach=true \
  --name=cadvisor \
  gcr.io/google-containers/cadvisor:latest

cAdvisor is now running (in the background) on http://localhost:8080. The bind mounts above give cAdvisor read-only access to the directories of Docker state it needs to observe.
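
To confirm the container came up, you can hit the health endpoint (cAdvisor serves a plain /healthz, which responds with "ok" when the daemon is healthy):

curl http://localhost:8080/healthz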

Note: If you're running on CentOS, Fedora, or RHEL (or are using LXC), take a look at our running instructions.

We have detailed instructions on running cAdvisor standalone outside of Docker. cAdvisor running options may also be interesting for advanced use cases. If you want to build your own cAdvisor Docker image, see our deployment page.

For Kubernetes users, cAdvisor can be run as a daemonset. See the instructions for how to get started, and for how to kustomize it to fit your needs.

Building and Testing

See the more detailed instructions in the build page. This includes instructions for building and deploying the cAdvisor Docker image.

Exporting stats

cAdvisor supports exporting stats to various storage plugins. See the documentation for more details and examples.
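
As a sketch, the storage backend is selected with command-line flags appended to the quick start invocation above. The flag names follow the storage driver documentation; the InfluxDB address and database name are placeholders for illustration:

sudo docker run \
  # ... same --volume, --publish, --detach, and --name flags as in the quick start ...
  gcr.io/google-containers/cadvisor:latest \
  -storage_driver=influxdb \
  -storage_driver_host=localhost:8086 \
  -storage_driver_db=cadvisor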

Web UI

cAdvisor exposes a web UI at its port:

http://<hostname>:<port>/

See the documentation for more details.

Remote REST API & Clients

cAdvisor exposes its raw and processed stats via a versioned remote REST API. See the API's documentation for more information.

There is also an official Go client implementation in the client directory. See the documentation for more information.
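The official client wraps these calls for you; for a one-off query it is also possible to hit a v1.3 endpoint with nothing but the standard library. The sketch below assumes cAdvisor is serving on localhost:8080 as in the quick start, and models only two fields of the machine-info response; treat the field names as assumptions drawn from the v1.3 schema rather than a complete definition.

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

// machineInfo holds a subset of the fields returned by the v1.3
// machine endpoint. Illustrative only; see the API docs for the rest.
type machineInfo struct {
	NumCores       int    `json:"num_cores"`
	MemoryCapacity uint64 `json:"memory_capacity"` // bytes
}

// decodeMachineInfo parses a machine-info response body.
func decodeMachineInfo(body []byte) (machineInfo, error) {
	var m machineInfo
	err := json.Unmarshal(body, &m)
	return m, err
}

func main() {
	// Assumes a cAdvisor started as in the quick start above.
	resp, err := http.Get("http://localhost:8080/api/v1.3/machine")
	if err != nil {
		fmt.Println("cAdvisor not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Println("read error:", err)
		return
	}
	m, err := decodeMachineInfo(body)
	if err != nil {
		fmt.Println("decode error:", err)
		return
	}
	fmt.Printf("%d cores, %d bytes of memory\n", m.NumCores, m.MemoryCapacity)
}
```

For anything beyond a quick probe, prefer the official client in the client directory, which tracks the API types as they evolve.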

Roadmap

cAdvisor aims to improve the resource usage and performance characteristics of running containers. Today, we gather and expose this information to users. In our roadmap:

  • Advise on the performance of a container (e.g., when it is being negatively affected by another container, or when it is not receiving the resources it requires).
  • Auto-tune the performance of the container based on previous advice.
  • Provide usage prediction to cluster schedulers and orchestration layers.

Community

Contributions, questions, and comments are all welcome and encouraged! cAdvisor developers hang out on Slack in the #sig-node channel (get an invitation here). We also have the kubernetes-users Google Groups mailing list.
