error #35 - installed CUDA driver version older than runtime even though 361.42 driver is installed? #191
Comments
Looks like your driver wasn't installed properly. How did you install it?
It's Ubuntu 15.10 (GNU/Linux 4.2.0-42-generic x86_64). This is what I did from the beginning:

```
$ sudo apt-get update
```

Then I blacklisted nouveau by adding the following line (save and quit):

```
blacklist nouveau
```

```
$ echo options nouveau modeset=0 | sudo tee -a /etc/modprobe.d/nouveau-kms.conf
```

I may have had to re-run the NVIDIA installer again at this stage (exactly the same 2 lines as before). And finally I made sure both the docker and nvidia-docker-plugin services are up:

```
$ service nvidia-docker status
```

And as mentioned above, the nvidia/cuda Docker image is able to run nvidia-smi, the GPU and driver versions show as expected, and beniz/deepdetect_gpu does seem to work properly with the GPU.
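The nouveau-blacklisting steps above can be sketched end to end as below. This is a sketch, not the exact commands from the comment: the filename `blacklist-nouveau.conf` is an assumption (the comment doesn't say which file the `blacklist nouveau` line went into), and the `update-initramfs` step is the usual extra step needed so the blacklist takes effect at boot.

```shell
# Disable the open-source nouveau driver so the NVIDIA installer's module can load.
# ASSUMPTION: the blacklist line goes in /etc/modprobe.d/blacklist-nouveau.conf;
# the original comment does not name the file.
echo "blacklist nouveau" | sudo tee /etc/modprobe.d/blacklist-nouveau.conf
echo "options nouveau modeset=0" | sudo tee -a /etc/modprobe.d/nouveau-kms.conf

# Rebuild the initramfs so the blacklist applies at boot; reboot before
# re-running the NVIDIA driver installer.
sudo update-initramfs -u
```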
What's the output of
Hmm. I just got a similar unexpected error while playing with a Torch-based docker image.
... but a DIGITS image and an NVcaffe image work fine? Not sure what's happening here.
@3XX0 helped me figure out my problem. I was trying to use CUDA while building the image, but the GPU isn't available yet at build time. When I changed the last step in my Dockerfile from a …

@Motherboard what does this command do for you?
|
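The build-time pitfall described above can be illustrated with a minimal Dockerfile sketch. This is an assumption-laden illustration, not the actual Dockerfile from the thread: `train.py` is a placeholder, and the base image tag is the one mentioned later in this issue. With nvidia-docker 1.x, the driver volume and device nodes are injected only at `run` time, so any `RUN` step that calls into CUDA fails during `docker build`.

```dockerfile
FROM nvidia/cuda:7.0-cudnn4-devel-ubuntu14.04

# BAD: a RUN step executes during `docker build`, when no GPU, driver volume,
# or device node is available, so CUDA calls here fail.
# RUN python train.py

# OK: CMD executes when the container is started with `nvidia-docker run`,
# after the driver volume and devices have been mounted.
CMD ["python", "train.py"]
```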
nvidia-docker doesn't like it when I don't give it all the volumes declared in the Dockerfile.
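The volume requirement mentioned above comes from how nvidia-docker 1.x works: nvidia-docker-plugin creates a named Docker volume containing the driver's user-space libraries, and `nvidia-docker run` mounts it plus the NVIDIA device nodes. A rough plain-docker equivalent is sketched below; the volume name follows the plugin's `nvidia_driver_<version>` convention (361.42 is the driver version from this issue), and the exact set of device nodes varies by machine.

```shell
# List the driver volume created by nvidia-docker-plugin
docker volume ls | grep nvidia_driver

# Roughly what `nvidia-docker run --rm nvidia/cuda nvidia-smi` expands to:
# mount the driver volume read-only and expose the device nodes explicitly.
docker run --rm \
  --volume=nvidia_driver_361.42:/usr/local/nvidia:ro \
  --device=/dev/nvidiactl \
  --device=/dev/nvidia-uvm \
  --device=/dev/nvidia0 \
  nvidia/cuda nvidia-smi
```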
I don't know what was previously wrong, but I've tried running DIGITS again and it seems to be fine... I can't reproduce the error...
I struggled a lot trying to use my GTX 860M in a Lenovo Y70 machine with an i7 and an integrated Intel graphics card, and one of the errors was quite similar to the ones you are getting. I discovered this about how to activate the NVIDIA card before anything tries to access it through the drivers. Just to open a possible solution path:

```
./deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
cudaGetDeviceCount returned 35
```

But if I try with optirun:

```
$ optirun NVIDIA_CUDA-8.0_Samples/bin/x86_64/linux/release/deviceQuery
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "GeForce GTX 860M"
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 8.0, CUDA Runtime Version = 8.0, NumDevs = 1, Device0 = GeForce GTX 860M
```

That makes me think all my problems are related to the way I invoke programs. Now I'm investigating how to make it work with Torch for recurrent neural networks, but with the GPU...
I'm using an Amazon EC2 machine with NVIDIA driver version 361.42, and with nvidia-docker and nvidia-docker-plugin installed and running.
Running the latest DIGITS (4.0) shows this in the log:
nvidia-docker volume ls on my machine shows
There are no CUDA binaries (e.g. deviceQuery or nvidia-smi) that I could find in the DIGITS docker image, but running

```
nvidia-docker run --rm nvidia/cuda nvidia-smi
```

results in
Trying to nvidia-docker build a Dockerfile based on nvidia/cuda:7.0-cudnn4-devel-ubuntu14.04, which clones the master branch of Caffe and compiles it with cuDNN enabled, fails at the beginning of testing with the following error:
But oddly enough, beniz/deepdetect_gpu does seem to work properly with the GPU...
Any ideas?
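One hedged diagnostic sketch for the "driver version older than runtime" error in the title: error 35 is `cudaErrorInsufficientDriver`, raised when the host kernel driver reported by nvidia-smi is older than what the CUDA runtime inside the image requires. Comparing the two is a quick first check. The `--query-gpu` flags are standard nvidia-smi options; the `nvcc` check assumes a devel-variant image (runtime/base images don't ship nvcc).

```shell
# Driver version on the host, as seen inside the container via the mounted volume
nvidia-docker run --rm nvidia/cuda \
  nvidia-smi --query-gpu=driver_version --format=csv,noheader

# CUDA toolkit version baked into the image (devel images only)
nvidia-docker run --rm nvidia/cuda nvcc --version
```

If the image's CUDA version needs a newer driver than the one reported, the container will fail with error 35 even though nvidia-smi itself runs fine.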