
Repeated applications of the torch interface result in an error when using templates #1210

Closed
trbromley opened this issue Apr 13, 2021 · 5 comments · Fixed by #1223
Labels: bug 🐛 Something isn't working

@trbromley (Contributor)

Consider the following code:

import torch
import pennylane as qml

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

def circuit(weights):
    qml.templates.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(wires=0))

qnode = qml.QNode(circuit, dev, interface="torch")

weights = torch.ones((3, 2))  # shape (n_layers, n_qubits)
qnode(weights)  # first evaluation converts the underlying tape to the Torch interface

qnode.to_torch()  # second conversion: raises the MRO error below

Here, we create a QNode containing BasicEntanglerLayers - an operation that was previously a tape (changed in #1138). Evaluating the QNode with qnode(weights) converts the contained quantum tape to the Torch interface. The final line then converts to the Torch interface a second time, which results in the error:

TypeError: Cannot create a consistent method resolution
order (MRO) for bases TorchInterface, TorchQuantumTape

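For context, this kind of TypeError arises whenever a dynamically created subclass lists a base both directly and indirectly. The following minimal sketch (hypothetical code, not PennyLane's actual implementation - the class names TorchInterface and TorchQuantumTape are borrowed from the error message only) reproduces the failure mode by applying an interface mixin twice:

```python
class QuantumTape:
    """Stand-in for a plain quantum tape."""


class TorchInterface:
    """Stand-in for an interface mixin applied via in-place class mutation."""

    @classmethod
    def apply(cls, tape):
        # Dynamically create a subclass combining the interface with the
        # tape's current class, then swap the tape's class in place.
        tape.__class__ = type("TorchQuantumTape", (cls, tape.__class__), {})
        return tape


tape = QuantumTape()
TorchInterface.apply(tape)
# tape's class is now TorchQuantumTape(TorchInterface, QuantumTape) - valid MRO.

try:
    # Second application: bases are (TorchInterface, TorchQuantumTape).
    # TorchQuantumTape already inherits from TorchInterface, so C3
    # linearization cannot order the bases consistently.
    TorchInterface.apply(tape)
except TypeError as exc:
    print(exc)  # "Cannot create a consistent method resolution order (MRO) ..."
```

The base-class order demands TorchInterface before TorchQuantumTape, while inheritance demands the opposite, so Python's C3 linearization fails with exactly the message quoted above.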
Instead, if we were to use a different circuit without templates, the error does not appear:

def circuit(weights):
    for i in range(3):
        for j in range(2):
            qml.RY(weights[i, j], wires=j)
    return qml.expval(qml.PauliZ(wires=0))

Also, the circuit with BasicEntanglerLayers did not cause an error before #1138 was merged. Hence, the migration of templates from tapes to operations has introduced this issue.

Could this be related to the templates' use of expand()? The expand() method returns a quantum tape - do we need to be careful about the interface there?

Additional information

This issue was identified through failing tests for the TorchLayer tutorial. That tutorial creates multiple TorchLayers from the same QNode, resulting in to_torch() being called multiple times. We don't typically expect users to call to_torch() explicitly.

@trbromley trbromley added the bug 🐛 Something isn't working label Apr 13, 2021
@trbromley (Contributor, author)

@mariaschuld, adding you since this is related to the template-to-ops refactor. Then again, I wonder whether this is a problem more generally for any operation that is expanded because it is not supported on the device 🤔

@trbromley (Contributor, author)

Once this is fixed, it would be good to revert the changes of PennyLaneAI/qml#247.

@mariaschuld (Contributor)

mariaschuld commented Apr 14, 2021

Interesting...

I checked the following three differences between templates (which are now operations) and "original" operations, but none seems to make a difference:

  • Templates directly define the expand() method, instead of defining a decomposition that is called by the parent class's expand(). But I changed BasicEntanglerLayers to define a decomposition instead, and the same error occurs.
  • Templates sit in a different folder, but that does not seem to be the problem either: I moved QFT to the templates folder, changed its decomposition to an expand() method, and the code example ran fine. Yet for another template, such as StronglyEntanglingLayers, the error persists.
  • Templates overwrite the __init__ method of the Operation class. But adding such a method to QFT does not cause an error, so it can't be the culprit either.

It must be a very strange edge case.

@josh146 (Member)

josh146 commented Apr 16, 2021

Removing the QNode, to make the non-working example even more minimal:

import torch
import pennylane as qml
from pennylane.interfaces.torch import TorchInterface

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

weights = torch.ones((3,))

with TorchInterface.apply(qml.tape.QuantumTape()) as tape:
    qml.U3(*weights, wires=0)
    qml.expval(qml.PauliZ(wires=0))

tape = tape.expand()

res = tape.execute(dev)
print(res)

TorchInterface.apply(tape)  # this line errors with the same error message as above
res = tape.execute(dev)
print(res)

@josh146 (Member)

josh146 commented Apr 16, 2021

I solved the issue in #1223 - it turns out this wasn't related to the templates refactor; the refactor merely surfaced this edge case 🙂 Applying the Torch interface twice would error for any expanded/decomposed operation.
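One general way to avoid this class of error is to make interface application idempotent: skip the class mutation if the interface is already applied. The sketch below is hypothetical illustration only (the actual fix in #1223 may work differently), reusing the stand-in class names from the error message:

```python
class QuantumTape:
    """Stand-in for a plain quantum tape."""


class TorchInterface:
    """Stand-in interface mixin with an idempotency guard."""

    @classmethod
    def apply(cls, tape):
        # Guard: if the interface was already applied, do nothing.
        if isinstance(tape, cls):
            return tape
        tape.__class__ = type("TorchQuantumTape", (cls, tape.__class__), {})
        return tape


tape = QuantumTape()
TorchInterface.apply(tape)
TorchInterface.apply(tape)  # second application is now a safe no-op
print(type(tape).__name__)  # TorchQuantumTape
```

With the isinstance check in place, repeated applications (as triggered by expanded operations or multiple to_torch() calls) leave the tape's class untouched instead of attempting an inconsistent re-subclassing.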
