
Add a multidimensional matrix multiply #16

Open
bluss opened this issue Dec 19, 2015 · 6 comments

Comments

@bluss
Member

bluss commented Dec 19, 2015

General dimensions, like numpy::dot.

Restricted to float types, for transparent BLAS support.
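
For context, numpy.dot with an N-d first argument contracts the last axis of a with the second-to-last axis of b. A minimal sketch of that rule for the 3-D × 2-D case, under the float restriction above (the function name dot_3d_2d is hypothetical, not an existing ndarray API):

use ndarray::{Array3, ArrayView2, ArrayView3};

// numpy::dot semantics for a 3-D `a` (n, m, k) and 2-D `b` (k, p): contract
// a's last axis against b's first axis, giving an (n, m, p) result.
// Flattening a's leading axes reduces the whole thing to one 2-D multiply,
// so the existing (BLAS-capable) 2-D dot does all the work.
fn dot_3d_2d(a: ArrayView3<f64>, b: ArrayView2<f64>) -> Array3<f64> {
    let (n, m, k) = a.dim();
    assert_eq!(k, b.dim().0);
    let p = b.dim().1;
    let a2 = a.to_owned().into_shape((n * m, k)).unwrap(); // copy ensures standard layout
    a2.dot(&b).into_shape((n, m, p)).unwrap()
}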

bluss changed the title from "Add a better matrix multiply" to "Add a multidimensional matrix multiply" on Mar 13, 2016
@termoshtt
Member

NumPy has einsum (Einstein summation) for efficient higher-order tensor manipulation:
https://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html
I think it would be better to implement a general einsum for rust-ndarray, and rewrite dot to call the corresponding specific einsum.
I hope something like the following pseudo-code would work:

let a = Array::<f64, _>::random((7, 5, 10));
let b = Array::<f64, _>::random((10, 5));
let c = einsum!("ijk", "kj" -> "i" | a, b); // yields a one-dimensional array with 7 components
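
For reference, the contraction that pseudo-code asks for, c[i] = sum over j, k of a[i, j, k] * b[k, j], can already be written out by hand against ndarray's existing API. A minimal sketch (the function name is made up here, and ones stands in for random data):

use ndarray::{Array1, Array2, Array3, ArrayView2, ArrayView3, Axis};

// Direct evaluation of c[i] = sum over j, k of a[i, j, k] * b[k, j].
fn einsum_ijk_kj_i(a: ArrayView3<f64>, b: ArrayView2<f64>) -> Array1<f64> {
    Array1::from_shape_fn(a.dim().0, |i| {
        // For fixed i, index_axis yields the (j, k) slice of `a`; b.t() has
        // the same (j, k) shape, so an element-wise product plus sum finishes it.
        (&a.index_axis(Axis(0), i) * &b.t()).sum()
    })
}

fn main() {
    let a = Array3::<f64>::ones((7, 5, 10));
    let b = Array2::<f64>::ones((10, 5));
    let c = einsum_ijk_kj_i(a.view(), b.view());
    assert_eq!(c.len(), 7); // one-dimensional, 7 components, as above
}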

@bluss
Member Author

bluss commented Nov 15, 2016

Then I'd be curious how you recover the performance of matrix multiplication. ndarray uses the matrixmultiply crate, which uses a packing strategy and a vectorized kernel.
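
For 2-D inputs that kernel is reached through dot, or through the gemm-style general_mat_mul in ndarray::linalg in recent releases. A quick sketch, assuming the default matrixmultiply backend:

use ndarray::linalg::general_mat_mul;
use ndarray::Array2;

fn main() {
    let a = Array2::<f64>::ones((4, 3));
    let b = Array2::<f64>::ones((3, 5));

    // Both forms route through matrixmultiply's packed, vectorized kernel
    // (or BLAS gemm when the blas feature is enabled).
    let c = a.dot(&b); // allocating form
    let mut d = Array2::<f64>::zeros((4, 5));
    general_mat_mul(1.0, &a, &b, 0.0, &mut d); // in-place: d = 1 * a.dot(b) + 0 * d
    assert_eq!(c, d);
}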

@termoshtt
Member

I saw opt-numpy, which decomposes a given einsum into more fundamental tensor reductions. It can detect the cases where dot or dgemm can be used, and in those cases the performance will be good. I have no idea about the performance in the remaining cases :<
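
Concretely, for the pseudo-code above such a decomposition could notice that c[i] is the trace of a[i] · b, i.e. one dgemm-shaped product per slice followed by a diagonal sum. A hedged sketch (einsum_via_gemm is a made-up name):

use ndarray::{Array1, ArrayView2, ArrayView3, Axis};

// Decomposed form: for each i, form the (5, 5) product a[i] · b with the
// existing 2-D dot (which can call BLAS), then take its trace. This equals
// the direct sum c[i] = sum over j, k of a[i, j, k] * b[k, j].
fn einsum_via_gemm(a: ArrayView3<f64>, b: ArrayView2<f64>) -> Array1<f64> {
    Array1::from_shape_fn(a.dim().0, |i| {
        a.index_axis(Axis(0), i).dot(&b).diag().sum()
    })
}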

@bluss
Member Author

bluss commented Nov 15, 2016

Thanks for the link. I'd spontaneously say that a good einsum is (almost) a project as large as ndarray itself, and would be best built as its own crate.

@ngoldbaum

ngoldbaum commented Jul 22, 2019

Would be nice to have more matrix multiply implementations. Even expanding the set of implemented types to include Array3 would be helpful for me; it's currently not straightforward to port numpy code that does vectorized matrix multiplies (e.g. matrix multiplication over stacks of matrices or vectors).
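
What gets hand-written for that today looks roughly like the following. A minimal sketch of a stacked (batched) matrix multiply over Array3 (batched_matmul is a hypothetical helper, not an ndarray API):

use ndarray::{Array3, ArrayView3, Axis};

// Multiply corresponding 2-D matrices in two stacks:
// a is (n, m, k), b is (n, k, p), result is (n, m, p).
fn batched_matmul(a: ArrayView3<f64>, b: ArrayView3<f64>) -> Array3<f64> {
    let (n, m, _k) = a.dim();
    let p = b.dim().2;
    let mut c = Array3::<f64>::zeros((n, m, p));
    for ((mut c_i, a_i), b_i) in c
        .axis_iter_mut(Axis(0))
        .zip(a.axis_iter(Axis(0)))
        .zip(b.axis_iter(Axis(0)))
    {
        // Each slice is an ordinary 2-D view, so the existing dot
        // (matrixmultiply- or BLAS-backed) applies per matrix in the stack.
        c_i.assign(&a_i.dot(&b_i));
    }
    c
}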

@chenpeizhi

chenpeizhi commented Feb 14, 2020

The TACO project looks very promising. It was designed with both dense and sparse tensors in mind, and supports various sparse and block-sparse tensor formats. Its API is similar to Julia's TensorOperations or Nim's Arraymancer, which is more intuitive than that of NumPy's einsum or opt_einsum. I wonder whether there is any plan to interface with TACO or to implement similar functionality.
