half precision not supported by CPU? #8
Half precision is not supported on the CPU. MKL does not provide half-precision BLAS routines.
How do I modify the code so it can be used on the CPU?
Could you clarify your question? I am not sure what you are asking. For running on the CPU, only single precision (float) is supported. The reason there is no half precision lies with Intel's MKL: it does not support half-precision computation, so there is nothing we can do for now.
When I compile the current code base and run cuBERT_benchmark, I get the error "half precision not supported by CPU". My question is: how do I modify the code to make it runnable on the CPU? I only need single-precision computing.
You can simply change cuBERT/benchmark/benchmark_cu.cpp line 8 (at commit 8ab0384) to

    typedef float Dtype;

and line 11 to

    cuBERT_ComputeType compute_type = cuBERT_COMPUTE_FLOAT;
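Put together, the top of the modified benchmark would look roughly like this. This is a sketch: only the two changed lines come from this thread, while the include and the surrounding context are assumptions about the file at that commit.

    #include "cuBERT.h"  // assumed header name for the cuBERT C API

    // benchmark_cu.cpp after the suggested edits
    typedef float Dtype;                                      // line 8: single precision instead of half
    cuBERT_ComputeType compute_type = cuBERT_COMPUTE_FLOAT;  // line 11: was the half-precision compute type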
If you would like to run your own model, you may also need to give the output buffer allocated at cuBERT/benchmark/benchmark_cu.cpp line 39 (at commit 8ab0384) more memory space, and change output_type from cuBERT_LOGITS to the output type that matches your own model.
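As a hedged illustration of that buffer change, the allocation might be enlarged along these lines. The variable names and sizes below are placeholders, not copied from benchmark_cu.cpp:

    int batch_size = 128;  // placeholder batch size
    int output_dim = 2;    // placeholder: your model's per-sample output size
    float* output = new float[batch_size * output_dim];  // give the buffer enough room
    // ...run the benchmark, passing the output type that matches your model
    // instead of cuBERT_LOGITS...
    delete[] output;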
Thanks a lot!
I cannot run the CPU code now.