
TF GPU compute #28

Open
vlad17 opened this issue Jul 15, 2017 · 3 comments
vlad17 commented Jul 15, 2017

The Python 3 GPU Dockerfile specifies ENV TF_CUDA_COMPUTE_CAPABILITIES=3.7, which is the compute capability of the K80s. AWS's g3 instances also carry M60 cards, which have compute capability 5.2. Could that line be changed to ENV TF_CUDA_COMPUTE_CAPABILITIES=3.7,5.2 so that the TF that's built is optimized for all AWS GPU offerings?

See NVIDIA's CUDA GPUs page for the full compute-capability listing.
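For concreteness, the requested change is a one-line edit to the GPU Dockerfile; a sketch (the exact file path in the repo may differ):

```dockerfile
# Build TF kernels for both K80 (3.7) and M60 (5.2) cards
ENV TF_CUDA_COMPUTE_CAPABILITIES=3.7,5.2
```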

houqp commented Jul 20, 2017

Sure, I will update this on the next release :)

vlad17 commented Feb 4, 2018

Thanks! I've been seeing a related problem now:

2018-02-04 22:39:16.960722: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1093] Ignoring visible gpu device (device: 0, name: Tesla M60, pci bus id: 0000:00:1b.0, compute capability: 5.2) with Cuda compute capability 5.2. The minimum required Cuda capability is 7.0.

This stems from the same issue (in the dl/tensorflow/1.4.0/Dockerfile-py3.gpu.cuda9cudnn7_aws Dockerfile):

ENV TF_CUDA_COMPUTE_CAPABILITIES=3.7,7.0 should perhaps be ENV TF_CUDA_COMPUTE_CAPABILITIES=3.7,5.2,7.0?
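To illustrate why the M60 is being ignored, here is a simplified sketch of the capability check (`covers` is a hypothetical helper, not TF's actual code; real TensorFlow builds can also JIT PTX from a listed capability, so the exact behavior varies by version):

```python
def covers(build_caps: str, device_cap: str) -> bool:
    """Return True if device_cap appears in the comma-separated
    TF_CUDA_COMPUTE_CAPABILITIES list the binary was built with.
    Simplified membership test for illustration only."""
    return device_cap in build_caps.split(",")

# A 3.7,7.0 build has no kernels for a Tesla M60 (capability 5.2):
print(covers("3.7,7.0", "5.2"))      # False -> device ignored
print(covers("3.7,5.2,7.0", "5.2"))  # True  -> device usable
```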

@ReDeiPirati ReDeiPirati self-assigned this Feb 5, 2020
ReDeiPirati commented
Hi @vlad17, sorry for the late reply.

I've just labeled this issue as a feature request; we will add this in the next release.
