
[BUG] CNOT and PauliZ do not work with large batched states and tensorflow #4892

Closed
timmysilv opened this issue Nov 28, 2023 · 0 comments · Fixed by #4889
Labels
bug 🐛 Something isn't working


Expected behavior

When using tensorflow, batched data, and states with 8 or more wires, I don't expect any error to be raised.

Actual behavior

A cryptic error is raised.

Additional information

First reported on the forum

Source code

import pennylane as qml
from pennylane import numpy as np

import tensorflow as tf
from pennylane.templates.embeddings import AmplitudeEmbedding

dev = qml.device("default.qubit", wires=8)

@qml.qnode(dev, interface="tf")
def ancillary_qcnn_circuit(inputs):
    # Embed each 16-element input on wires 0-3; `inputs` may carry a batch dimension.
    AmplitudeEmbedding(features=inputs, wires=range(4), normalize=True)
    qml.CNOT(wires=[0, 1])
    qml.PauliZ(1)
    qml.Toffoli(wires=[0, 2, 4])
    qml.Toffoli(wires=[0, 2, 5])
    qml.Toffoli(wires=[0, 2, 6])
    qml.Toffoli(wires=[0, 2, 7])
    return [qml.expval(qml.PauliZ(i)) for i in range(4, 8)]

# Batch of 3 inputs, each with 2**4 = 16 amplitudes.
params = np.random.rand(3, 16)
ancillary_qcnn_circuit(tf.Variable(params))

Tracebacks

File "/var/folders/w9/9mnrk11j25b3klztdzb01bqr0000gq/T/ipykernel_86620/2335111251.py", line 23, in <module>
    ancillary_qcnn_circuit(tf.Variable(params))
  File "/Users/matthews/src/github.com/PennyLaneAI/pennylane/pennylane/qnode.py", line 1030, in __call__
    res = qml.execute(
  File "/Users/matthews/src/github.com/PennyLaneAI/pennylane/pennylane/interfaces/execution.py", line 631, in execute
    results = inner_execute(tapes)
  File "/Users/matthews/src/github.com/PennyLaneAI/pennylane/pennylane/interfaces/execution.py", line 252, in inner_execute
    return cached_device_execution(tapes)
  File "/Users/matthews/src/github.com/PennyLaneAI/pennylane/pennylane/interfaces/execution.py", line 374, in wrapper
    res = list(fn(tuple(execution_tapes.values()), **kwargs))
  File "/Users/matthews/src/github.com/PennyLaneAI/pennylane/pennylane/devices/default_qubit.py", line 478, in execute
    results = tuple(
  File "/Users/matthews/src/github.com/PennyLaneAI/pennylane/pennylane/devices/default_qubit.py", line 479, in <genexpr>
    simulate(
  File "/Users/matthews/src/github.com/PennyLaneAI/pennylane/pennylane/devices/qubit/simulate.py", line 228, in simulate
    state, is_state_batched = get_final_state(circuit, debugger=debugger, interface=interface)
  File "/Users/matthews/src/github.com/PennyLaneAI/pennylane/pennylane/devices/qubit/simulate.py", line 123, in get_final_state
    state = apply_operation(op, state, is_state_batched=is_state_batched, debugger=debugger)
  File "/Users/matthews/.pyenv/versions/3.9.13/lib/python3.9/functools.py", line 888, in wrapper
    return dispatch(args[0].__class__)(*args, **kw)
  File "/Users/matthews/src/github.com/PennyLaneAI/pennylane/pennylane/devices/qubit/apply_operation.py", line 257, in apply_cnot
    return apply_operation_tensordot(op, state)
  File "/Users/matthews/src/github.com/PennyLaneAI/pennylane/pennylane/devices/qubit/apply_operation.py", line 133, in apply_operation_tensordot
    tdot = math.tensordot(reshaped_mat, state, axes=axes)
  File "/Users/matthews/src/github.com/PennyLaneAI/pennylane/pennylane/math/multi_dispatch.py", line 151, in wrapper
    return fn(*args, **kwargs)
  File "/Users/matthews/src/github.com/PennyLaneAI/pennylane/pennylane/math/multi_dispatch.py", line 389, in tensordot
    return np.tensordot(tensor1, tensor2, axes=axes, like=like)
  File "/Users/matthews/.pyenv/versions/3.9.13/envs/pl/lib/python3.9/site-packages/autoray/autoray.py", line 80, in do
    return get_lib_fn(backend, fn)(*args, **kwargs)
  File "/Users/matthews/.pyenv/versions/3.9.13/envs/pl/lib/python3.9/site-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/Users/matthews/.pyenv/versions/3.9.13/envs/pl/lib/python3.9/site-packages/tensorflow/python/framework/ops.py", line 5883, in raise_from_not_ok_status
    raise core._status_to_exception(e) from None  # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.InvalidArgumentError: {{function_node __wrapped__MatMul_device_/job:localhost/replica:0/task:0/device:CPU:0}} Matrix size-incompatible: In[0]: [4,4], In[1]: [6,128] [Op:MatMul] name:

System information

pl dev (PennyLane development version)

Existing GitHub issues

  • I have searched existing GitHub issues to make sure the issue does not already exist.
timmysilv added the bug 🐛 Something isn't working label on Nov 28, 2023
timmysilv added a commit that referenced this issue on Nov 28, 2023
**Context:**
First reported [on the
forum](https://discuss.pennylane.ai/t/abnormal-operation-of-a-specific-circuit-possibly-caused-by-tf-batch/3712).
`qml.CNOT` uses `apply_operation_tensordot` when the state has >= 9 dimensions
and the interface is tensorflow, but the `is_state_batched` argument was not
being passed along (so it always assumed the state was unbatched). While fixing
this, I noticed that it was also missing for PauliZ.
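
For illustration, here is a minimal NumPy sketch (not PennyLane internals; the shapes
come from the reproducer above) of why dropping the batch flag produces the shape
mismatch in the traceback: a batch of 3 states on 8 wires is a 9-dimensional tensor
whose leading axis is the batch, so wire k lives at axis k + 1.

import numpy as np

# Batched 8-wire state from the reproducer: shape (3, 2, ..., 2), nine dimensions.
state = np.zeros((3,) + (2,) * 8)

# CNOT reshaped to (2, 2, 2, 2), acting on wires 0 and 1.
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]]).reshape(2, 2, 2, 2)
wires = (0, 1)

# Batch-aware contraction: skip the leading batch axis, so wire k is axis k + 1.
ok = np.tensordot(cnot, state, axes=[[2, 3], [w + 1 for w in wires]])
print(ok.shape)  # (2, 2, 3, 2, 2, 2, 2, 2, 2)

# What the missing flag amounts to: contracting axes (0, 1) as if the state were
# unbatched. Axis 0 is the batch axis of size 3, so the sizes disagree. This is
# consistent with the [4,4] x [6,128] mismatch TF reports above (6 = 3 * 2 from
# the contracted axes, 128 = 2**7 from the remaining wire axes).
try:
    np.tensordot(cnot, state, axes=[[2, 3], list(wires)])
except ValueError as exc:
    print(exc)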

**Description of the Change:**
Pass the correct `is_state_batched` value to `apply_operation_tensordot`
when called from the CNOT and PauliZ apply functions.
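
As a hedged sketch of the shape of this change, using simplified stand-ins rather
than the real functions in `pennylane/devices/qubit/apply_operation.py` (whose
signatures differ and handle more cases):

import numpy as np

# Simplified stand-in: contract a reshaped gate with the state's wire axes,
# offsetting past the leading batch axis when the state is batched.
def apply_operation_tensordot(mat, state, wires, is_state_batched=False):
    offset = 1 if is_state_batched else 0
    gate_axes = list(range(mat.ndim // 2, mat.ndim))
    state_axes = [w + offset for w in wires]
    return np.tensordot(mat, state, axes=[gate_axes, state_axes])

# Simplified stand-in for the CNOT fast path. The fix is in the last argument:
# forward is_state_batched rather than letting it fall back to its default.
def apply_cnot(mat, state, wires, is_state_batched=False):
    return apply_operation_tensordot(mat, state, wires, is_state_batched=is_state_batched)

# A batched 8-wire state now contracts without error.
state = np.zeros((3,) + (2,) * 8)
cnot = np.eye(4).reshape(2, 2, 2, 2)
print(apply_cnot(cnot, state, wires=(0, 1), is_state_batched=True).shape)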

**Benefits:**
No more unexpected errors. I think in some cases the old code would not raise an
error but would silently give unexpected results; that shouldn't happen now either.

**Possible Drawbacks:**
N/A

Fixes #4892