CUDA error #355
It's possible that this is a version mismatch between CUDA and the installed PyTorch. Can you check that the major CUDA version reported by …
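One way to compare the two versions is to ask PyTorch which CUDA toolkit it was built against. The sketch below is hedged: `torch_cuda_version` is a hypothetical helper name, and the `try/except` keeps it runnable even where PyTorch is not installed. `torch.version.cuda` is the real attribute PyTorch exposes for this.

```python
def torch_cuda_version():
    """Return the CUDA version string PyTorch was built against.

    Returns None if PyTorch is not installed or was built CPU-only.
    The result can then be compared against the system toolkit
    (e.g. the version printed by `nvcc --version`).
    """
    try:
        import torch  # may not be installed in every environment
    except ImportError:
        return None
    # torch.version.cuda is None for CPU-only builds
    return torch.version.cuda

print(torch_cuda_version())
```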
The main issue here is:
We should fix this by creating a single-valued tensor and calling … Though I thought we were already doing this... @StpMax were you specifying …
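The idea of probing CUDA with a small tensor can be sketched as follows. This is an assumption about the intended check, not the project's actual code: `torch.cuda.is_available()` can report True even when the GPU is too old for the installed build, so actually allocating a tensor on the device is a more reliable test. `cuda_really_works` is a hypothetical name.

```python
def cuda_really_works():
    """Probe CUDA by actually moving a single-valued tensor to the GPU.

    Returns False when torch is missing, built CPU-only, or the
    allocation fails (e.g. unsupported compute capability).
    """
    try:
        import torch
        torch.ones(1).cuda()  # raises if CUDA is unusable in practice
        return True
    except Exception:
        return False
```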
@George3d6 auto detect. I checked with `pip show torch`.
According to this, PyTorch will report CUDA availability even if the GPU is no longer supported, as is the case for the GTX 660. @George3d6 indeed we already do that here, so I guess a solution is to set the minimum supported compute capability to 3.7, as stated in the PyTorch issue discussion. Thoughts?
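A minimum-capability gate like the one proposed could look like the sketch below. The constant and function names are hypothetical; in practice the `(major, minor)` tuple would come from `torch.cuda.get_device_capability()`, and a GTX 660 (Kepler) reports (3, 0), which falls below the suggested 3.7 cutoff.

```python
# Minimum compute capability suggested in the PyTorch issue discussion.
MIN_COMPUTE_CAPABILITY = (3, 7)

def gpu_usable(capability, minimum=MIN_COMPUTE_CAPABILITY):
    """Return True if a (major, minor) capability meets the minimum.

    Python compares tuples lexicographically, so (3, 0) < (3, 7) < (5, 2).
    """
    return tuple(capability) >= minimum

print(gpu_usable((3, 0)))  # GTX 660 -> False
print(gpu_usable((5, 2)))  # -> True
```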
@StpMax can you please try the branch …
@paxcema same issue
Fixed in #359, closing |
I have an old GPU (GeForce 660), so I assume CUDA should not be used during predictor training, but in the log I see:
Training finishes well, and the predictor is queryable.