TF not compatible with AWS GPU instances? #142
Comments
This is correct. We currently do not support AWS GPU instances because of their 3.0 CUDA compute capability. You can try disabling that requirement in the source code and seeing if it works for you.
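To illustrate the suggestion above, here is a minimal sketch (hypothetical helper, not TensorFlow's actual source) of what a minimum compute-capability gate looks like, and what "disabling that requirement" amounts to. The names `MIN_COMPUTE_CAPABILITY` and `is_supported` are assumptions for illustration only:

```python
# Hypothetical sketch of a minimum compute-capability check,
# not TensorFlow's actual code.

# TF's threshold at the time of this thread, per the comment above.
MIN_COMPUTE_CAPABILITY = (3, 5)

def is_supported(device_capability, minimum=MIN_COMPUTE_CAPABILITY):
    """Return True if a device's (major, minor) compute capability
    meets the minimum requirement. Tuples compare lexicographically,
    so (3, 0) < (3, 5) < (5, 2)."""
    return device_capability >= minimum

# AWS g2 instances (NVIDIA GRID K520) report compute capability 3.0:
print(is_supported((3, 0)))          # fails under the 3.5 requirement
print(is_supported((3, 0), (3, 0)))  # passes once the check is relaxed
```

Relaxing the check only lets TensorFlow attempt to run on the device; kernels compiled for a higher capability can still fail at load time.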
Is there any plan to support them? I.e., does the TensorFlow source actually use the new features of CUDA compute capability 3.5?
zheng-xq is working on adding configurable support for it. For every compute capability you add, the compile time and binary size increase significantly, so we're trying to find a solution. In any case, thanks for the report -- de-duping with #25.
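The compile-time and binary-size cost mentioned above comes from generating separate device code per target. An illustrative `nvcc` invocation (not TensorFlow's actual build script; `kernel.cu` is a placeholder file name) shows why each added capability costs extra:

```shell
# Each -gencode pair adds another embedded device-code target,
# growing both compile time and the size of the resulting binary.
nvcc -c kernel.cu -o kernel.o \
  -gencode arch=compute_30,code=sm_30 \
  -gencode arch=compute_35,code=sm_35
```

Shipping PTX for a single virtual architecture (e.g. `compute_30`) and relying on JIT compilation is the usual way to keep the binary small at the cost of first-launch latency.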
It seems like Google Compute Engine doesn't even offer GPU instances, and AWS GPU instances aren't supported because TensorFlow requires CUDA compute capability >= 3.5. Is this correct?