
half precision not supported by CPU? #8

Closed
wolfshow opened this issue Apr 1, 2019 · 7 comments

Comments


wolfshow commented Apr 1, 2019

I cannot run the CPU code now.


levyfan commented Apr 1, 2019

Half precision is not supported on CPU: MKL does not provide half-precision BLAS routines.


wolfshow commented Apr 1, 2019

> Half precision is not supported by CPU. MKL does not have such half precision BLAS methods.

How do I modify the code so it can run on CPU?


levyfan commented Apr 1, 2019

> Half precision is not supported by CPU. MKL does not have such half precision BLAS methods.
>
> how to modify the code to be used with CPU?

Could you clarify your question? I am not sure what you are asking.

For running on CPU, only single precision (float) is supported. The reason there is no half precision lies with Intel's MKL: it does not support half-precision computation, so there is nothing we can do for now.


wolfshow commented Apr 1, 2019

> For running on CPU, only single precision (float) is supported. The reason of no half precision is lying on Intel's MKL. They do not support half precision computing. There is nothing we can do now.

When I compile the current code base and run "cuBERT_benchmark", I get the error "half precision not supported by CPU". My question is: how do I modify the code to make it runnable on CPU? I only need single-precision computing.


levyfan commented Apr 1, 2019

You can simply change

```cpp
typedef half_float::half Dtype;
```

to

```cpp
typedef float Dtype;
```

and change

```cpp
cuBERT_ComputeType compute_type = cuBERT_COMPUTE_HALF;
```

to

```cpp
cuBERT_ComputeType compute_type = cuBERT_COMPUTE_FLOAT;
```


levyfan commented Apr 1, 2019

If you would like to run your own model, you may also need to give

```cpp
Dtype logits[batch_size];
```

more memory space, and change output_type from cuBERT_LOGITS to your own model's output type.


wolfshow commented Apr 1, 2019

> If you would like to run your own model, you may also need to give
>
> cuBERT/benchmark/benchmark_cu.cpp, line 39 in 8ab0384:
>
> ```cpp
> Dtype logits[batch_size];
> ```
>
> more memory space and change output_type from cuBERT_LOGITS to your own model.

Thanks a lot!

@levyfan levyfan closed this as completed May 31, 2019