gom

`gom` is a CLI tool that displays a human-readable table with GPU usage information. Think `nvidia-smi`, but minimalist and pretty. It also shows per-container GPU usage if Docker is installed.
Use the package manager `pip` to install `gom`.
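For example, assuming the package is published on PyPI under the name `gom` (as the instruction above suggests):

```shell
pip install gom
```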
`gom show` displays a table with GPU usage information. `gom watch` displays the same table and updates it every second.
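A typical session might look like this (only the two subcommands described above are assumed; the table layout will depend on your GPUs):

```shell
# Print a one-shot GPU usage table
gom show

# Same table, refreshed every second (Ctrl-C to exit)
gom watch
```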
Compare the output of `gom show` and `nvidia-smi`. I hope you'll agree that `gom` produces clearer and more helpful output (for example, it breaks usage down across the 4 running Docker containers), while `nvidia-smi`'s output is long and complex (I couldn't even screenshot the whole thing).
You may need to install a different version of `pynvml` depending on your CUDA version.
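One way to check which CUDA version you have before picking a `pynvml` release (this is a general sketch, not a gom-specific requirement):

```shell
# The nvidia-smi header reports the driver's CUDA version
nvidia-smi

# Then install (or upgrade to) a pynvml release that matches it
pip install --upgrade pynvml
```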