
[BUG] merge_amplitude_embedding does not support a batch dimension #4333

Closed
akatief opened this issue Jul 7, 2023 · 2 comments · Fixed by #4353
Labels
bug 🐛 Something isn't working

Comments


akatief commented Jul 7, 2023

Feature details

Make merge_amplitude_embedding work with batches.

When merge_amplitude_embedding is fed 2D (batched) tensors, it performs a Kronecker product under the hood. The product expands the batch dimension as well as the embedding one, producing an intermediate tensor of shape (batch_size^2, embedding_dim^2). As a result, any circuit that relies on separate amplitude embeddings fails in a batched context.
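The shape blow-up can be reproduced directly with NumPy's kron (the array names and sizes below are illustrative, not taken from the PennyLane internals):

```python
import numpy as np

# Two batched amplitude tensors: batch_size=3, embedding_dim=2 each.
a = np.random.rand(3, 2)
b = np.random.rand(3, 2)

# np.kron treats every axis the same, so the batch axis is expanded
# along with the embedding axis:
merged = np.kron(a, b)
print(merged.shape)  # (9, 4) == (batch_size**2, embedding_dim**2)
```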

Implementation

The merge_amplitude_embedding function could be changed to call einsum instead of kron.

This is a snippet implementing a similar behavior in PyTorch:

# a and b are the two 2D tensors you want to encode separately
torch.einsum('nk,nl->nkl', a, b).reshape(a.shape[0], -1)
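For comparison, here is the same contraction sketched in NumPy (the arrays are illustrative placeholders); the batch axis is preserved while only the embedding axes are combined, and each row matches the per-sample Kronecker product:

```python
import numpy as np

a = np.random.rand(3, 2)  # batch_size=3, embedding_dim=2
b = np.random.rand(3, 4)  # batch_size=3, embedding_dim=4

# Contract only the embedding axes ('k' and 'l'); 'n' (batch) is kept.
merged = np.einsum('nk,nl->nkl', a, b).reshape(a.shape[0], -1)
print(merged.shape)  # (3, 8): batch axis preserved

# Row i equals the Kronecker product of the i-th pair of vectors:
print(np.allclose(merged[0], np.kron(a[0], b[0])))  # True
```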

How important would you say this feature is?

2: Somewhat important. Needed this quarter.

Additional information

No response

@akatief akatief added the enhancement ✨ New feature or request label Jul 7, 2023
@albi3ro
Contributor

albi3ro commented Jul 11, 2023

Thanks for opening this issue @akatief . This is definitely something we could fix, and we are looking into adding it in.

Just to add some extra context for the issue, I put together a minimum example:

import pennylane as qml
from pennylane import numpy as np

@qml.qnode(qml.device('default.qubit', wires=2))
@qml.transforms.merge_amplitude_embedding
def circuit(m1, m2):
    qml.AmplitudeEmbedding(m1, wires=0)
    qml.AmplitudeEmbedding(m2, wires=1)
    return qml.expval(qml.PauliZ(0))

m1 = np.array([[1,0], [0, 1]])
m2 = np.array([[0,1], [1,0]])

Right now we have:

>>> circuit(m1, m2)
tensor([ 1.,  1., -1., -1.], requires_grad=True)

This is an effective batch dimension of 4, not 2.

Instead, the circuit should have a batch dimension of 2.

I would almost call this a bug actually.
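The expected behavior can be worked out by hand for the example above (a NumPy sketch of the intended merge, not the actual transform code): merging per batch element keeps the batch dimension at 2, and the two product states |01> and |10> give <Z> on wire 0 of +1 and -1 respectively.

```python
import numpy as np

m1 = np.array([[1, 0], [0, 1]])
m2 = np.array([[0, 1], [1, 0]])

# Merge the embeddings per batch element: batch dimension stays 2.
states = np.einsum('nk,nl->nkl', m1, m2).reshape(m1.shape[0], -1)
print(states.shape)  # (2, 4)

# <Z> on wire 0: +1 on basis states |00>, |01>; -1 on |10>, |11>.
probs = np.abs(states) ** 2
z0 = probs[:, 0] + probs[:, 1] - probs[:, 2] - probs[:, 3]
print(z0)  # [ 1. -1.]
```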

@eddddddy
Contributor

I would also consider this a bug since it falls in the category "things we should support broadcasting for but don't".
Luckily, the fix in this case seems to be straightforward.

@eddddddy eddddddy added bug 🐛 Something isn't working and removed enhancement ✨ New feature or request labels Jul 12, 2023
@eddddddy eddddddy changed the title Multiple batched amplitude embedding [BUG] The merge_amplitude_embedding does not support a batch dimension Jul 12, 2023
@eddddddy eddddddy changed the title [BUG] The merge_amplitude_embedding does not support a batch dimension [BUG] merge_amplitude_embedding does not support a batch dimension Jul 12, 2023