Torchlayer error when running on GPU - Tensor is on CPU, but expected it to be on GPU #1290
Comments
Hey @zzh237. I am not sure I understand this issue - you seem to refer to some previous thread (i.e., what do you mean by "it still throws the error"?)... Could you please edit and clarify your message, explaining what minimum working example you run, what you expect to see, and what the unexpected behaviour is? Information on your system and PennyLane version will also help speed up support. I tried to fix your formatting a bit already, but the idea is that issues are self-contained reports of a problem. Thanks! :)
Hi @mariaschuld, thank you so much for the formatting! I have updated that.
Let me try to understand... You run some PennyLane code (please post this code too as a minimum working example - I can only guess that you are using PennyLane in combination with PyTorch?) and the code is supposed to be executed on your GPU. But when you run the code you get an error, which you can fix by changing those lines you show, i.e. by sending an object in the vjp calculation to the GPU? (I am not an expert on this, so I am slightly confused why creating a variable ...) We are really keen to improve PennyLane's GPU capabilities, so happy to try and consider any changes. But we definitely need more context to help here!
Here is the code (imports added; `H_layer`, `RY_layer`, and `entangling_layer` are the helper functions from the PennyLane quantum transfer learning tutorial, and the training loop is an excerpt from a larger class, so `epoch`, `self.args`, `self.trainloader`, `self.optimizer`, and `self.model` are defined elsewhere):

```python
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import pennylane as qml

n_qubits = 4    # Number of qubits
q_depth = 2     # Depth of the quantum circuit (number of variational layers)
q_delta = 0.01  # Initial spread of random quantum weights

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def quantum_net(q_input_features, q_weights_flat):
    """The variational quantum circuit."""
    # Reshape weights
    q_weights = q_weights_flat.reshape(q_depth, n_qubits)
    # Start from state |+>, unbiased w.r.t. |0> and |1>
    H_layer(n_qubits)
    # Embed features in the quantum node
    RY_layer(q_input_features)
    # Sequence of trainable variational layers
    for k in range(q_depth):
        entangling_layer(n_qubits)
        RY_layer(q_weights[k])
    # Expectation values in the Z basis
    exp_vals = [qml.expval(qml.PauliZ(position)) for position in range(n_qubits)]
    return tuple(exp_vals)

class Net(nn.Module):
    """Torch module implementing the *dressed* quantum net."""

    def __init__(self):
        """Definition of the *dressed* layout."""
        super().__init__()
        self.q_params = nn.Parameter(q_delta * torch.randn(q_depth * n_qubits))

    def forward(self, input_features):
        """Defining how tensors are supposed to move through the
        *dressed* quantum net."""
        # Obtain the input features for the quantum circuit
        # by reducing the feature dimension from 512 to 4
        q_in = torch.tanh(input_features) * np.pi / 2.0
        # Apply the quantum circuit to each element of the batch and append to q_out
        q_out = torch.Tensor(0, n_qubits)
        q_out = q_out.to(device)
        for elem in q_in:
            q_out_elem = quantum_net(elem, self.q_params).float().unsqueeze(0)
            q_out = torch.cat((q_out, q_out_elem))
        # Return the two-dimensional prediction from the postprocessing layer
        return q_out

device = "cuda:0"
net = Net().to(device)  # the original snippet passed `layer_sizes`, which
                        # the __init__ shown above does not accept

for i in range(epoch, self.args.epochs):
    for x, y in self.trainloader:
        net.train()
        self.optimizer.zero_grad()
        print("### model is on GPU", next(self.model.parameters()).is_cuda)
        out = net(x)
        loss = F.cross_entropy(out, y, reduction='mean')
        loss.backward()
```

Error message:
I found that solution by looking at this: Torchlayer error when running on GPU #709
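For readers hitting the same message, here is a minimal, pure-Python sketch of the rule behind it (no PyTorch or GPU required; `FakeTensor` and `cat` are hypothetical stand-ins, not PennyLane or PyTorch APIs): `torch.cat` requires all inputs to live on the same device, so a QNode result produced on the CPU must be moved with `.to(device)` before being concatenated with a CUDA-allocated buffer.

```python
# Hypothetical mock of the same-device rule; FakeTensor and cat
# are illustrative stand-ins, NOT real PyTorch APIs.
class FakeTensor:
    def __init__(self, data, device="cpu"):
        self.data = data
        self.device = device

    def to(self, device):
        # Like torch.Tensor.to: returns a copy placed on the target device.
        return FakeTensor(list(self.data), device)

def cat(tensors):
    # Like torch.cat: refuses inputs that live on different devices.
    devices = {t.device for t in tensors}
    if len(devices) > 1:
        raise RuntimeError("Tensor is on CPU, but expected it to be on GPU")
    return FakeTensor([x for t in tensors for x in t.data], devices.pop())

# The QNode output lands on the CPU...
q_out_elem = FakeTensor([0.1, 0.2])
# ...while the accumulator buffer lives on the GPU.
q_out = FakeTensor([], device="cuda:0")
# Moving the CPU tensor over first makes the concatenation succeed.
merged = cat((q_out, q_out_elem.to("cuda:0")))
```

Concatenating `q_out` with the unmoved `q_out_elem` raises the error instead, which mirrors the traceback in this issue.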
Ah, perfect, thanks! Let me get back to you on this.
I am also facing this same problem when using PennyLane and PyTorch.
Hey @zzh237 and @ADITYA964. It looks like, while making PennyLane fully run on GPUs is on our near-term to-do list, this will be a bigger effort. If you are keen to contribute, feel free to discuss solutions here and make a PR once we have decided on a way forward! I wonder if, in the meantime, the fixes in the PR you mentioned could help? Sorry that I cannot do more at this stage!
@mariaschuld understood. For those who have the same problem as me and @zzh237: downgrade PennyLane with `pip install pennylane==0.14.1`. It works perfectly with this version.
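Since the workaround above pins PennyLane to 0.14.1, a tiny standard-library helper (hypothetical, not part of PennyLane) can flag when an installed version is newer than the last release reported in this thread to work on GPU:

```python
# Hypothetical helper: compare dotted version strings numerically.
# "0.14.1" is the last version reported in this thread to work on GPU.
def parse_version(version):
    return tuple(int(part) for part in version.split("."))

def gpu_known_good(installed, last_good="0.14.1"):
    # True if the installed release is no newer than the known-good one.
    return parse_version(installed) <= parse_version(last_good)
```

For example, `gpu_known_good("0.14.1")` is `True`, while `gpu_known_good("0.15.0")` is `False`; a string comparison would get cases like `"0.9.0"` vs `"0.14.1"` wrong, which is why the parts are converted to integers first.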
That is an important piece of information, thanks @ADITYA964! @josh also tagging you here to keep this in mind going forward.
Hi, the code below throws errors if I run it on GPU.
Error message:
If I change the code to the version below, then it works! So could you change the code? Thanks.