[BUG] merge_amplitude_embedding does not support a batch dimension
#4333
Labels: bug 🐛
Feature details
Make merge_amplitude_embedding work with batches.
When 2D (batched) tensors are fed to merge_amplitude_embedding, the transform performs a Kronecker product under the hood, which expands the batch dimension as well as the embedding one, producing an intermediate tensor of size (batch_size^2, embedding_dim^2). This makes any circuit that relies on separate amplitude embeddings fail in a batched context.
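For context, a minimal reproduction sketch along these lines (illustrative only; the decorator placement and exact failure mode may vary between PennyLane versions):

```python
import pennylane as qml
import torch

dev = qml.device("default.qubit", wires=2)

# Depending on the PennyLane version, the transform may need to be applied
# to the quantum function (below the qnode decorator) instead of the QNode.
@qml.transforms.merge_amplitude_embedding
@qml.qnode(dev)
def circuit(a, b):
    qml.AmplitudeEmbedding(a, wires=0)
    qml.AmplitudeEmbedding(b, wires=1)
    return qml.state()

# Batched inputs: one normalised single-qubit state per batch entry.
a = torch.tensor([[1.0, 0.0], [0.0, 1.0]])
b = torch.tensor([[0.0, 1.0], [1.0, 0.0]])

# Expected merged embedding shape: (2, 4). With the kron-based merge the batch
# dimension is expanded as well, giving (4, 4), and the circuit fails.
circuit(a, b)
```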
Implementation
The merge_amplitude_embedding function could be changed to call einsum instead of kron.
See pennylane/pennylane/transforms/optimization/merge_amplitude_embedding.py, line 93 in 34d2fb2.
This is a snippet implementing similar behaviour in PyTorch:

```python
import torch

# a and b are the two batched (2D) tensors you want to encode separately.
# einsum forms a per-sample outer product over the embedding indices (k, l)
# while keeping the batch index n, then the result is flattened back to 2D.
merged = torch.einsum('nk,nl->nkl', a, b).reshape(a.shape[0], -1)
```
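For a quick sanity check of the shapes involved (values are arbitrary and only illustrate the broadcasting difference between the einsum-based merge and the current kron-based one):

```python
import torch

a = torch.rand(8, 4)  # batch of 8 two-qubit amplitude vectors
b = torch.rand(8, 2)  # batch of 8 single-qubit amplitude vectors

merged = torch.einsum('nk,nl->nkl', a, b).reshape(a.shape[0], -1)
print(merged.shape)            # torch.Size([8, 8])   -> (batch_size, k * l)
print(torch.kron(a, b).shape)  # torch.Size([64, 8])  -> (batch_size**2, k * l)
```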
How important would you say this feature is?
2: Somewhat important. Needed this quarter.
Additional information
No response