Question: how do you recommend installing CUDA on the host OS? #6

Closed
tleyden opened this Issue Nov 23, 2015 · 5 comments

tleyden commented Nov 23, 2015

I ran into a minor mismatch in CUDA versions after installing CUDA on the host using these instructions and then trying to run the kaixhin/cuda Docker image.

Any advice here?

Kaixhin (Owner) commented Nov 23, 2015

With 221dd24 I added a note to the CUDA section of the README explaining that the recommended way of installing the right driver version is to use the .run file (so that people don't have to update their driver every time new images are built; in my experience the .run files are also more reliable). I've also added a link to this note from every current CUDA-based Dockerfile, so people will always be pointed to the most up-to-date information. If you have any additions to the instructions, please let me know; otherwise, feel free to close this issue.
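As a rough sketch, the .run-file approach described above boils down to something like the following (the driver version number here is only a placeholder; match it to the version the images were built against):

```shell
# Hypothetical sketch of a .run-file driver install. Stop the display
# manager and blacklist the nouveau driver first if necessary.
chmod +x NVIDIA-Linux-x86_64-XXX.XX.run   # XXX.XX is a placeholder version
sudo ./NVIDIA-Linux-x86_64-XXX.XX.run     # installs the driver only, not the full CUDA toolkit
```

This installs just the kernel driver and user-space libraries, which is all the host needs; the CUDA toolkit itself lives inside the Docker image.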

As mentioned in #7 (comment), this driver mismatch will be sorted once NVIDIA release their official images with their Docker plugin.

tleyden commented Nov 23, 2015

Thanks! I think this answers my question, so I'll close the issue.

One follow up question:

once NVIDIA release their official images with their Docker plugin.

Isn't that already available here?

https://github.com/NVIDIA/nvidia-docker

(I was notified about this, but haven't had a chance to actually look at it too closely)

tleyden closed this Nov 23, 2015

Kaixhin (Owner) commented Nov 24, 2015

I suggest looking at the currently open issues for more details, but this is still in the experimental stage. Right now they rely on a shell script that acts as a wrapper for Docker; the proper way to interact with Docker is to write a Docker plugin, which they are developing now. Once that is ready, they will probably do an official release on the Docker Hub, which I can then use as the base CUDA image for the various machine learning images.
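For reference, the wrapper approach mentioned above looks roughly like this (a sketch using the nvidia-docker command and the kaixhin/cuda image from earlier in the thread; requires an NVIDIA GPU and driver on the host):

```shell
# The nvidia-docker wrapper forwards its arguments to docker, first
# injecting the GPU device nodes and driver-library volumes so the
# container can use the host's GPU:
nvidia-docker run --rm kaixhin/cuda nvidia-smi
```

With a plain `docker run`, the same command would fail because the container cannot see the host's GPU devices or driver libraries; the wrapper (and later the plugin) exists to bridge exactly that gap.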

victorhcm commented May 23, 2016

Hey @Kaixhin, it seems NVIDIA has released a plugin and the CUDA images.

I ran a small test and it seems to be working fine. I thought it would work out of the box with your CUDA images, but unfortunately it doesn't. I'm wondering whether it's mature enough yet for us to update our images.

Kaixhin (Owner) commented May 23, 2016

@victorhcm Actually I think I got it working with kaixhin/cuda-torch, but I haven't tested it extensively. I've tried to set everything up in the nvidia branch, and I'm basically waiting for an actual release (as they should be leaving beta soon; see NVIDIA/nvidia-docker#82).
