
Are cuDNN/BLAS, MKL, and Neon all BLAS compatible? #2397

Closed
wangkuiyi opened this issue Jun 6, 2017 · 3 comments

wangkuiyi (Collaborator) opened this issue Jun 6, 2017

If we represent tensors with the same memory layout, is it correct that we can call cuDNN, MKL, and Neon to operate on those tensors across various devices?

@wangkuiyi wangkuiyi self-assigned this Jun 6, 2017
wangkuiyi (Collaborator, Author) commented Jun 6, 2017

OK, my two cents: BLAS just defines some very primitive, frequently used operations, but all of these toolkits are much more powerful than BLAS alone.

gangliao (Contributor) commented Jun 7, 2017

The memory layout should be specified when using third-party libs such as MKL, cuDNN, and OpenBLAS. All of them support column-major order, so that's not a problem.

In Majel, the internal implementation of gemm(Array, ...) also invokes cuBLAS's cublas_gemm(Array.ptr()) and MKL's cblas_gemm(Array.ptr()).

@wangkuiyi Thus, it's definitely compatible.

wangkuiyi (Collaborator, Author) commented:

Thanks to @gangliao
