
Minimum CUDA capability is 3.5? But 3.0 stated on site #17445

Closed
tharindu-mathew opened this Issue Mar 5, 2018 · 8 comments

tharindu-mathew commented Mar 5, 2018

I'm able to run the hello world examples, but the following warning (or error) is printed. So I can't make use of my GPU? While this may be a simple correction on the web page, is there any way I can get a version that runs on a CUDA 3.0 card?

OS: Ubuntu 16.04
GPU: K2000M

On the Linux installation page, the minimum capability is listed as 3.0. But when I try to run hello world on a CUDA 3.0 card, the following is printed:

name: Quadro K2000M major: 3 minor: 0 memoryClockRate(GHz): 0.745
pciBusID: 0000:01:00.0
totalMemory: 1.95GiB freeMemory: 977.81MiB
2018-03-05 13:43:54.533246: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1283] Ignoring visible gpu device (device: 0, name: Quadro K2000M, pci bus id: 0000:01:00.0, compute capability: 3.0) with Cuda compute capability 3.0. The minimum required Cuda capability is 3.5.
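The rejection in the log boils down to a version comparison between the device's compute capability and the minimum baked into the prebuilt binaries. A minimal shell sketch of that comparison (illustrative only, not TensorFlow's actual code; the values are taken from the log above, and `sort -V` stands in for the version compare):

```shell
# Compare a device's compute capability against the 3.5 minimum.
# sort -V sorts version strings; if the device's capability sorts
# first (and isn't equal to the minimum), the device is rejected.
cap="3.0"   # Quadro K2000M (major: 3, minor: 0), from the log above
min="3.5"   # minimum required by the prebuilt binaries
if [ "$(printf '%s\n%s\n' "$cap" "$min" | sort -V | head -n1)" = "$cap" ] \
   && [ "$cap" != "$min" ]; then
  echo "Ignoring visible gpu device with compute capability $cap (minimum is $min)"
else
  echo "device accepted"
fi
```

For a capability of 6.1 (or anything at or above 3.5) the same check prints "device accepted".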

yongtang (Member) commented Mar 5, 2018

I think the minimal capability is 3.5 for the binary install. It might still be possible to support 3.0 when building from source.

Created PR #17448 for the doc fix.
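For reference, a source build targeting compute capability 3.0 would look roughly like the sketch below. This is a hedged outline, not verified instructions: `TF_CUDA_COMPUTE_CAPABILITIES` is the variable the TF 1.x `./configure` script reads, but exact bazel flags and prompts vary between releases.

```shell
# Hypothetical build-from-source sketch for a compute capability 3.0 card.
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow

# Preset the capability so ./configure doesn't default to 3.5+:
export TF_CUDA_COMPUTE_CAPABILITIES=3.0
./configure   # answer yes to CUDA support

# Build the pip package and install it:
bazel build --config=opt --config=cuda \
    //tensorflow/tools/pip_package:build_pip_package
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/tensorflow-*.whl
```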

tharindu-mathew (Author) commented Mar 5, 2018

OK, is this a regression? As per thread #25, this should be available?

rohan100jain (Member) commented Apr 4, 2018

I think the thread mentions that you might have to change some lines in common_runtime/gpu/gpu_device.cc for this to work. By default the minimum is 3.5.

tharindu-mathew (Author) commented Apr 4, 2018

It can be compiled from source with CUDA capability 3.0. It is working for me this way.

JoshuaC3 commented May 10, 2018

@mackiem did you have to make the changes suggested by @rohan100jain before building from source?

tharindu-mathew (Author) commented May 10, 2018

pabx06 commented May 12, 2018

Hello, I have a GPU with CUDA compute capability 2.1. Is there a chance I can change the source code to support my GPU's lower capability? Or do TensorFlow's algorithms need a specific capability?

Because the CPU takes way too long: I've been training the model for more than 48h so far on CPU.
Likely I will die of old age before I can evaluate the Inception model for my needs.

  • 19%–30% validation accuracy
  • 1.5M training steps
  • model: Inception-v3 transfer learning and retraining of the classification layer
  • dataset: 6GB
  • 130 classes, ...
  • CPU: Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz

tharindu-mathew (Author) commented May 15, 2018

I'm quite sure 2.1 is too old for a lot of the functionality and acceleration. Sorry.
