
Can't find nvidia-smi in 2.1.4-py3-tf-gpu #7

Open
jayavanth opened this issue Feb 20, 2018 · 2 comments

@jayavanth

$ docker run --runtime=nvidia -it --rm gw000/keras:2.1.4-py3-tf-gpu nvidia-smi 
docker: Error response from daemon: OCI runtime create failed: container_linux.go:296: starting container process caused "exec: \"nvidia-smi\": executable file not found in $PATH": unknown.
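That error comes from the OCI runtime itself: before starting the container process it resolves the requested command against the image's $PATH, and nvidia-smi is not there. The same lookup can be reproduced with the POSIX `command -v` builtin; a minimal sketch (the `check_bin` helper is hypothetical, added here only for illustration):

```shell
# Report whether a binary is visible on $PATH -- the same resolution the
# OCI runtime performs before exec'ing the container entrypoint.
check_bin() {
    if command -v "$1" >/dev/null 2>&1; then
        printf '%s found at %s\n' "$1" "$(command -v "$1")"
    else
        printf '%s: not found in PATH\n' "$1"
    fi
}

check_bin sh          # present in any POSIX image
check_bin nvidia-smi  # missing in gw000/keras:2.1.4-py3-tf-gpu
```

Running this inside the container (e.g. via `docker run ... sh -c '...'`) distinguishes "binary missing from the image" from "binary present but not on $PATH".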
@gw0
Owner

gw0 commented Feb 27, 2018

The base image for GPU support (https://github.com/gw0/docker-debian-cuda) underwent a major refactoring due to driver compatibility issues in some setups. The new image no longer ships the CUDA driver libraries and tools (anything specific to the host's CUDA kernel module).

This includes nvidia-smi. Can you check whether it gets injected some other way when using the nvidia OCI runtime? Something along the lines of:

$ docker run --runtime=nvidia -it --rm gw000/keras:2.1.4-py3-tf-gpu bash
$ find / -iname '*nvidia-smi*'

@jayavanth
Author

I see. There was no nvidia-smi in that image. It looks like NVIDIA hasn't released an official Docker image for Debian: https://hub.docker.com/r/nvidia/cuda/tags/
