int8 is not supported in GpuAdvancedIncSubtensor #6627
Comments
Yes, that doesn't work. There are no atomic int8 operations, so we probably won't support it. The fix is to use a larger dtype or run that node on the CPU.
That said, we should probably teach the optimizer not to move those cases to the GPU, so I'll leave the issue open until that is fixed.
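The larger-dtype workaround suggested above can be sketched with NumPy, whose `np.add.at` performs the same kind of scattered increment as an advanced `inc_subtensor` (a sketch, not Theano's actual GPU code path): doing the accumulation in int32 sidesteps the need for int8 atomics, and the result is cast back afterwards.

```python
import numpy as np

# Hypothetical int8 buffer we want to scatter-add into.
counts = np.zeros(5, dtype=np.int8)
idx = np.array([0, 1, 1, 3])          # repeated index -> accumulation
updates = np.ones(len(idx), dtype=np.int8)

# Workaround: do the scattered increment in a wider dtype
# (no int8 atomics required), then cast back to int8.
wide = counts.astype(np.int32)
np.add.at(wide, idx, updates.astype(np.int32))
counts = wide.astype(np.int8)

print(counts.tolist())  # [1, 2, 0, 1, 0]
```

Note that the repeated index 1 accumulates correctly; a naive `wide[idx] += updates` would drop the duplicate, which is exactly why the GPU kernel needs atomic operations in the first place.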
Does #6628 fix your issue? That said, it might be better in some cases to leave the operation on the CPU, since that can be faster; the non-dev20 version of the kernel can be quite slow.
@abergeron, thanks. Will check #6628 in the next few days.
Thanks, it worked! Should I close the issue now?
No, that's ok; we will close it when the fix is merged.
The fix is merged. Thanks.
I'm working on https://github.com/pcyin/NL2code, and it fails with certain options on the GPU (pcyin/NL2code#6):
An obvious fix, like the one in https://github.com/akhavr/Theano/blob/master/theano/gpuarray/subtensor.py, doesn't work because of a type mismatch.