[FEA] Tensor contractions #23
Hi @DavidAce, are you only looking for contractions as the feature, or everything that einsum can do?
I was mostly thinking of contractions, as in numpy's
Hi David, I think einsum might be a little overkill at this point because of its flexibility and overlap with other functions such as permute. However, adding contractions for the general case is probably a useful feature. The limitations would be whatever the limitations in cuTENSOR are (strides, data types, etc.), so we'll investigate these and get back to you.
Hi @DavidAce, I remember now that the reason we didn't include cuTENSOR was simply that it isn't shipped with CUDA and would create an extra download for users. We'll discuss the options for this internally.
+1
@cliffburdick Sure! I use tensor contractions to implement algorithms based on matrix product states (MPS) for studying quantum many-body systems in 1D. Typically my contractions involve 2 to 5 dense tensors of rank 1 to 8. These are of type

Syntactically, I find Eigen's approach nice, with chained contractions using the dot operator and lazy evaluation, e.g. A = B.contract(C, ...).contract(D, ...).contract(E, ...), where

As the number of tensors grows, it becomes non-trivial to determine the optimal order of contractions. There are several algorithms for finding good orderings; here are some examples. I would not expect order optimization to be part of a library, and indeed all libraries that I'm aware of leave this as an exercise to the user. Still, I suspect fellow practitioners would consider this "nice to have", in particular people studying 2D systems (so-called PEPS).

At the moment, the contractions I deal with are fairly simple. For instance, the following contraction takes most of the time in my simulations (in tensor diagram notation):

This tensor contraction is used to express the matrix-vector product result = ψ.contract(L, ...).contract(M, ...).contract(R, ...)

Depending on the model, the indices can have the following dimensions
A while ago I considered adding GPU acceleration using cuTENSOR on an RTX 2080 Ti, but decided against it despite promising performance. Mostly because I had access to far more CPU power than GPU power, but also because the contraction above resulted in quite a lot of code. I felt it would be hard to maintain, prone to human error, and a lot of effort to port. However, more GPUs have become available since (also with native fp64 support), so a high-level library handling contractions on the GPU would definitely make things interesting again by lowering the barrier. For instance, it would be cool to detect at runtime whether an HPC node has a GPU accelerator available and use it.
Hi @DavidAce, in case you don't know already, we are working on a new cuTensorNet library (part of the cuQuantum SDK), planned for release this December, which might be exactly what you need. There is a recent GTC talk on the cuQuantum SDK (not sure whether login/registration is required). cuTensorNet allows you to create an arbitrary tensor network (be it MPS, PEPS, MERA, or whatever) with the network topology specified by pairwise contractions among the tensors, for which it can
Roughly speaking, the capabilities of cuTensorNet can be nicely mapped to

Now, with regard to the nice introduction you wrote above, I have two questions:
Update: cuTensorNet is out, available with both C and Python APIs. See https://docs.nvidia.com/cuda/cuquantum/index.html.
Hi @DavidAce and @oscarbg, we're happy to report that we've added support for contractions via an einsum operator. To be clear, this is a subset of NumPy's einsum functionality. Using your example above, a 3-way contraction would be something like:
Closing this one for now. Please open a new issue if you find problems. |
Great work so far on MatX!
I wonder if tensor contractions (aka tensordot or einsum) are in the roadmap for MatX. Until now this has existed in cuTENSOR but it is quite verbose, so it would be great to write tensor contractions using MatX high-level syntax.