[BUG] Differentiation does not work with dynamic_one_shot #5736

Open · 1 task done
mudit2812 opened this issue May 23, 2024 · 0 comments · May be fixed by #5861
Labels
bug 🐛 Something isn't working
Expected behavior

I expect to be able to differentiate arbitrary circuits when using the dynamic_one_shot transform.
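
For reference, the transform is applied implicitly in the reproducer below because the circuit contains a mid-circuit measurement on a finite-shot device. A minimal sketch of applying it explicitly (assuming qml.dynamic_one_shot is exposed at the top level and can decorate a QNode directly) looks like:

import pennylane as qml

dev = qml.device("default.qubit", shots=10)

# Sketch only: explicit application of the transform as a decorator.
# In the reproducer below it is applied implicitly because the circuit
# returns a statistic of qml.measure(0) on a finite-shot device.
@qml.dynamic_one_shot
@qml.qnode(dev)
def circuit(x):
    qml.RX(x, 0)
    return qml.expval(qml.measure(0))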

Actual behavior

Differentiation fails: calling result.backward() raises a RuntimeError (see the traceback below).

Additional information

Originally from this forum discussion

Source code

import pennylane as qml
import torch

dev = qml.device("default.qubit", shots=10)

@qml.qnode(dev, interface='torch') # switch to torch interface
def f(x):
    qml.RX(x, 0)
    return qml.expval(qml.measure(0)) # expectation value of a mid-circuit measurement

x = torch.tensor(0.4, requires_grad=True) # switch to torch tensor
result = f(x) 
result.backward() # replace with torch gradient computation
x.grad

Tracebacks

/usr/local/lib/python3.10/dist-packages/autoray/autoray.py:81: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  return func(*args, **kwargs)
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-4-879285d05025> in <cell line: 13>()
     11 x = torch.tensor(0.4, requires_grad=True) # switch to torch tensor
     12 result = f(x)
---> 13 result.backward() # replace with torch gradient computation
     14 x.grad

1 frames
/usr/local/lib/python3.10/dist-packages/torch/_tensor.py in backward(self, gradient, retain_graph, create_graph, inputs)
    520                 inputs=inputs,
    521             )
--> 522         torch.autograd.backward(
    523             self, gradient, retain_graph, create_graph, inputs=inputs
    524         )

/usr/local/lib/python3.10/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
    264     # some Python versions print out the first line of a multi-line function
    265     # calls in the traceback and some print out the last line
--> 266     Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
    267         tensors,
    268         grad_tensors_,

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
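
For comparison, here is a sketch of the same circuit without the mid-circuit measurement (returning the expectation value of a terminal observable instead). I would expect this variant to backpropagate without error, which suggests the failure is specific to the dynamic_one_shot path:

import pennylane as qml
import torch

dev = qml.device("default.qubit", shots=10)

@qml.qnode(dev, interface="torch")
def g(x):
    qml.RX(x, 0)
    return qml.expval(qml.PauliZ(0))  # terminal measurement, no qml.measure

x = torch.tensor(0.4, requires_grad=True)
g(x).backward()  # expected to succeed via the finite-shot parameter-shift rule
print(x.grad)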

System information

Development version of PennyLane, using the dos-interfaces branch.

Existing GitHub issues

  • I have searched existing GitHub issues to make sure the issue does not already exist.