### Batch TensorNetwork
We consider the scenario where:

- there is a core tensor network $W$ as the model's weights;
- there are hundreds of tensor networks $B_i$ as input data.

Our goal is to calculate the hundreds of inner products between $W$ and each $B_i$.
There are three ways to do it:

- a naïve `for` loop, optionally with parallel processing to speed it up;
- a `batch` version of the TensorNetwork;
- `einsum` to achieve efficient coding.
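To make the call patterns concrete, here is a minimal numpy sketch in which toy dense tensors stand in for the actual tensor networks (names and shapes are illustrative, not from the repository); the `batch`/block-diagonal version is sketched after the next paragraph:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 300                                # "hundreds" of inputs
W = rng.normal(size=(4, 4, 4, 4))              # toy stand-in for the weight network
B = rng.normal(size=(n_samples, 4, 4, 4, 4))   # toy stand-ins for the inputs B_i

# 1) naive for loop: one full contraction per sample
loop_result = np.array([np.sum(W * B[i]) for i in range(n_samples)])

# 3) einsum: contract the whole batch in a single call
einsum_result = np.einsum('abcd,Babcd->B', W, B)

assert np.allclose(loop_result, einsum_result)
```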
For the `batch` version of the TensorNetwork, please see our paper (coming soon). The conclusion is very simple: the batch is equivalent to another block-diagonal (`blockdiag`) TensorNetwork with a larger bond dimension.
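As a toy illustration of the block-diagonal claim (my own example, reduced to the transfer matrices of two samples rather than full MPS cores): stacking the per-sample matrices block-diagonally gives one chain whose bond dimension is the sum of the originals, and whose product carries each sample's result in its own block.

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(1)
length, chi = 5, 3

# Two independent matrix chains (e.g. transfer matrices of two samples).
chain_a = [rng.normal(size=(chi, chi)) for _ in range(length)]
chain_b = [rng.normal(size=(chi, chi)) for _ in range(length)]

# The batch as a single block-diagonal chain with bond dimension 2*chi.
chain_batch = [block_diag(a, b) for a, b in zip(chain_a, chain_b)]

prod_a = np.linalg.multi_dot(chain_a)
prod_b = np.linalg.multi_dot(chain_b)
prod_batch = np.linalg.multi_dot(chain_batch)

# Each sample's product sits in its own diagonal block.
assert np.allclose(prod_batch[:chi, :chi], prod_a)
assert np.allclose(prod_batch[chi:, chi:], prod_b)
```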
`einsum` can express batch contractions such as `Babcd,Babcd->B`, which is not a plain tensor contraction (so it cannot be replicated with `tensordot`). This is the same as treating the batch TensorNetwork as a sparse matrix and contracting it in a sparse way.
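A small sketch of why this pattern falls outside `tensordot`: `tensordot` must sum over every paired axis, so it has no way to keep a shared batch axis un-summed.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(8, 2, 3, 4, 5))   # leading batch axis B = 8
Y = rng.normal(size=(8, 2, 3, 4, 5))

# einsum pairs the batch axis on both operands without summing over it.
per_sample = np.einsum('Babcd,Babcd->B', X, Y)

# tensordot sums every contracted axis, so matching all five axes
# collapses the batch as well: one scalar instead of 8 numbers.
total = np.tensordot(X, Y, axes=5)
assert np.allclose(total, per_sample.sum())
```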
We now consider a small example with MPS machine learning. The virtual bond dimension is set to 3 and the length to 20.
![benchmark](Batch TensorNetwork.assets/benchmark.png)
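For orientation, a self-contained numpy sketch of the batched MPS inner product at these sizes (rank-3 cores with traced boundary legs, plus the physical dimension and batch size, are my own simplifying assumptions; the repository's model and the benchmarked engines may differ):

```python
import numpy as np

rng = np.random.default_rng(3)
length, chi, d, n = 20, 3, 3, 256     # virtual bond 3, length 20; d and n assumed

# Weight MPS cores and a batch of input MPS cores (extra leading batch axis).
W = [0.5 * rng.normal(size=(chi, d, chi)) for _ in range(length)]
B = [0.5 * rng.normal(size=(n, chi, d, chi)) for _ in range(length)]

# Contract right to left: build batched transfer tensors and absorb them.
env = np.einsum('idj,Bkdl->Bikjl', W[-1], B[-1])
for w, b in zip(reversed(W[:-1]), reversed(B[:-1])):
    trans = np.einsum('idj,Bkdl->Bikjl', w, b)
    env = np.einsum('Bikjl,Bjlmp->Bikmp', trans, env)

# Trace both virtual loops: one inner product per sample.
inner = np.einsum('Bikik->B', env)
print(inner.shape)                    # (256,)
```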
Notice:

- `einsum` is usually provided by modern mathematical packages.
- The contraction engine for `loop` and `batch contraction` is `tn.contractor.auto`, which first computes an 'optimized' path (for a uniform matrix chain, the path simply goes from left to right or the reverse). The `efficient coding` variant uses an assigned path: right to left (see the sketch after this list). The `way_vectorized_map` is a parallel version of `loop`.
- Replicate the results with `python speed_benchmark.py`.
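As a sketch of the path remark above (numpy's `einsum_path` standing in for whichever path finder the engine actually uses), assuming a toy uniform matrix chain:

```python
import numpy as np

rng = np.random.default_rng(4)
mats = [rng.normal(size=(3, 3)) for _ in range(4)]

# Ask for an 'optimized' pairwise path; for a uniform matrix chain this
# is just a sequential order (left to right or the reverse).
path, info = np.einsum_path('ab,bc,cd,de->ae', *mats, optimize='optimal')
print(path)                      # e.g. ['einsum_path', (0, 1), (0, 1), (0, 1)]

# The assigned right-to-left path: fold the chain from the last matrix.
result = mats[-1]
for m in reversed(mats[:-1]):
    result = m @ result
assert np.allclose(result, np.linalg.multi_dot(mats))
```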