Tests fail with float16 on OpenCL while they should work. #492
Logs of the Debian CI are available here.
Is there a way to detect whether the current device is float16 or CUDA capable within pygpu?
We only support one kind of float16, called float16 storage. That means we store the data in GPU RAM as float16, but the computation is still done in float32. So all CUDA-capable GPUs support it.
float16 computation isn't well supported, except via the cuDNN library wrapper in Theano. NVIDIA doesn't recommend using it: it is hard to use, and they came up with mixed precision in Volta, which will be easier to use since it loses less precision.
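As a rough illustration of the storage-versus-compute split described above, here is a NumPy sketch. NumPy is used only as a stand-in so the example runs anywhere; it mimics what float16 storage means, not pygpu's actual internals:

```python
import numpy as np

# "float16 storage": values live in memory as float16 (2 bytes each),
# but arithmetic is carried out after upcasting to float32.
stored = np.arange(4, dtype=np.float16)   # float16 in memory
result = stored.astype(np.float32) * 0.5  # compute in float32
back = result.astype(np.float16)          # store the result as float16 again

print(stored.itemsize, result.dtype, back.dtype)  # → 2 float32 float16
```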
On Thu, Aug 17, 2017 at 1:24 PM, Ghislain Antony Vaillant wrote:
> Is there a way to detect whether the current device is float16 or CUDA capable within pygpu?
Ok, so that reduces to detecting whether the current device is CUDA capable. The reason I am asking is that, assuming such a function to detect CUDA capability exists in pygpu, it could be used together with unittest.skipif to skip the failing tests on the Debian CI (which uses pocl).
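A minimal sketch of that skipping idea using the standard-library unittest module. Note that `device_is_cuda` is a hypothetical probe (hard-coded to False here to mimic the pocl CI), since pygpu's actual API for this is exactly what is being asked about:

```python
import unittest

def device_is_cuda():
    # Hypothetical probe: a real version would ask pygpu which backend
    # the current device uses; hard-coded False mimics the pocl CI.
    return False

class Float16Tests(unittest.TestCase):
    @unittest.skipUnless(device_is_cuda(),
                         "float16 storage needs a CUDA-capable device")
    def test_float16_storage(self):
        self.assertEqual(1 + 1, 2)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(Float16Tests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(len(result.skipped))  # the single test is skipped, not failed
```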
I don't know whether float16 storage is supported in OpenCL or not. I'll let @abergeron answer that.
But sadly we don't have time for better OpenCL support. Contributions are always welcome.
On Thu, Aug 17, 2017 at 2:06 PM, Ghislain Antony Vaillant wrote:
> > So all CUDA-capable GPUs support it.
>
> Ok, so that reduces to detecting whether the current device is CUDA capable. The reason I am asking is that, assuming such a function to detect CUDA capability exists in pygpu, it could be used together with unittest.skipif to skip the failing tests on the Debian CI (which uses pocl).
Hmm, that's not what I am asking. Is there a way to call some function in pygpu…
As far as I know, float16 storage has been part of the core requirements for OpenCL since 1.0. The problem you are getting here is not that the device doesn't support float16; most probably there was a compilation error in the kernel, which is a problem we have to fix. You can try a Debug build to get more information out of this error (including a source dump and the compiler error log).
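For reference, half-precision *arithmetic* (as opposed to storage, which is core as noted above) is gated on the `cl_khr_fp16` OpenCL extension, so a capability probe could inspect the device's extension string. A small sketch under that assumption; `supports_fp16_compute` is a hypothetical helper, and with pyopencl one would pass `device.extensions` instead of a literal string:

```python
def supports_fp16_compute(extensions: str) -> bool:
    """Check an OpenCL device extension string for half-precision
    compute support. Half *storage* (vload_half/vstore_half) is core
    OpenCL, so its absence here does not rule out float16 storage."""
    return "cl_khr_fp16" in extensions.split()

# Hypothetical extension strings for illustration:
print(supports_fp16_compute("cl_khr_icd cl_khr_fp64"))   # → False
print(supports_fp16_compute("cl_khr_fp16 cl_khr_fp64"))  # → True
```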
Same context as in #491, but with float16 compute support. Running the tests on pocl produces the following errors:

Again, perhaps pytest skipif could help here.
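A sketch of that pytest skipif idea; `HAS_FP16_COMPUTE` is a hypothetical flag standing in for a real device probe (on pocl it would be False, so the test is skipped rather than failing):

```python
import pytest

# Hypothetical result of probing the active device; hard-coded False
# here to mimic the pocl CI environment.
HAS_FP16_COMPUTE = False

@pytest.mark.skipif(not HAS_FP16_COMPUTE,
                    reason="device lacks float16 compute support")
def test_float16_roundtrip():
    assert float("1.5") == 1.5
```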