Expected behavior
Suppose we load a backprop-compatible device with a finite number of shots specified, and then bind that device to a QNode with diff_method="backprop". My expectation would be that an error is raised at this point, since the backprop diff_method only makes sense when the device operates in analytic mode.
Actual behavior
Instead, we fall into this branch of the logic and the QNode is constructed successfully. However, any evaluation of the QNode is doomed to fail: with the autograd interface we get the traceback below, while with the TF interface the gradient simply evaluates to None.
Additional information
I'm not sure if this is a bug or expected behaviour. It is also probably not particularly pressing, since most use cases will have diff_method="best" and fall into this chain of the logic, which raises an error when shots is finite and falls back to the parameter-shift rule. This may also be fixed as we merge default.qubit with its backprop-compatible subclasses.
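To make the distinction concrete, here is a hypothetical sketch of the selection chain described above — not the actual PennyLane source; resolve_diff_method and its signature are invented for illustration. "best" quietly falls back to the parameter-shift rule on a finite-shot device, whereas an explicit "backprop" request arguably should raise instead of passing through:

```python
# Hypothetical sketch of the diff_method selection chain -- NOT the
# actual PennyLane implementation; names and signature are invented.
def resolve_diff_method(diff_method, shots, supports_backprop=True):
    if diff_method == "best":
        # "best" checks whether analytic backprop is possible and
        # quietly falls back to parameter-shift otherwise.
        if shots is None and supports_backprop:
            return "backprop"
        return "parameter-shift"
    if diff_method == "backprop":
        # An explicit backprop request on a finite-shot device is the
        # case this issue argues should error out immediately.
        if shots is not None:
            raise ValueError(
                "backprop is only supported in analytic mode (shots=None)"
            )
        return "backprop"
    return diff_method
```

Under this sketch, resolve_diff_method("best", shots=1000) would select parameter-shift, while resolve_diff_method("backprop", shots=1000) would raise rather than silently constructing a QNode that cannot be differentiated.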
Source code
# With autograd
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit.autograd", wires=1, shots=1000)

@qml.qnode(dev, interface="autograd", diff_method="backprop")
def f(x):
    qml.RX(x, wires=0)
    return qml.expval(qml.PauliZ(0))

x = np.array(0.4)
qml.grad(f)(x)
# With TF
import pennylane as qml
import tensorflow as tf

dev = qml.device("default.qubit.tf", wires=1, shots=1000)

@qml.qnode(dev, interface="tf", diff_method="backprop")
def f(x):
    qml.RX(x, wires=0)
    return qml.expval(qml.PauliZ(0))

x = tf.Variable(tf.ones(1))
with tf.GradientTape() as tape:
    out = f(x)
tape.jacobian(out, x)
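For reference, the silent None from the TF interface is consistent with how tf.GradientTape behaves whenever the graph between output and variable is severed. The sketch below mimics the non-differentiable sampling step with tf.stop_gradient — a stand-in for illustration, not what the device actually calls:

```python
import tensorflow as tf

x = tf.Variable(1.0)
with tf.GradientTape() as tape:
    # tf.stop_gradient severs the path from x to the output, much like
    # drawing basis-state samples does on a finite-shot device.
    y = tf.stop_gradient(x * 2.0)

# Rather than raising, TF reports the missing gradient as None.
print(tape.gradient(y, x))  # None
```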
Tracebacks
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
TypeError: float() argument must be a string or a number, not 'ArrayBox'

The above exception was the direct cause of the following exception:

ValueError                                Traceback (most recent call last)
<ipython-input-28-c68476946da3> in <module>
     10
     11 x = np.array(0.4)
---> 12 qml.grad(f)(x)

~/Documents/Coding/pennylane/pennylane/_grad.py in __call__(self, *args, **kwargs)
     99         """Evaluates the gradient function, and saves the function value
    100         calculated during the forward pass in :attr:`.forward`."""
--> 101         grad_value, ans = self._get_grad_fn(args)(*args, **kwargs)
    102         self._forward = ans
    103         return grad_value

~/miniconda3/envs/pennylane/lib/python3.7/site-packages/autograd/wrap_util.py in nary_f(*args, **kwargs)
     18             else:
     19                 x = tuple(args[i] for i in argnum)
---> 20             return unary_operator(unary_f, x, *nary_op_args, **nary_op_kwargs)
     21         return nary_f
     22     return nary_operator

~/Documents/Coding/pennylane/pennylane/_grad.py in _grad_with_forward(fun, x)
    116         difference being that it returns both the gradient *and* the forward pass
    117         value."""
--> 118         vjp, ans = _make_vjp(fun, x)
    119
    120         if not vspace(ans).size == 1:

~/miniconda3/envs/pennylane/lib/python3.7/site-packages/autograd/core.py in make_vjp(fun, x)
      8 def make_vjp(fun, x):
      9     start_node = VJPNode.new_root()
---> 10     end_value, end_node = trace(start_node, fun, x)
     11     if end_node is None:
     12         def vjp(g): return vspace(x).zeros()

~/miniconda3/envs/pennylane/lib/python3.7/site-packages/autograd/tracer.py in trace(start_node, fun, x)
      8     with trace_stack.new_trace() as t:
      9         start_box = new_box(x, t, start_node)
---> 10         end_box = fun(start_box)
     11         if isbox(end_box) and end_box._trace == start_box._trace:
     12             return end_box._value, end_box._node

~/miniconda3/envs/pennylane/lib/python3.7/site-packages/autograd/wrap_util.py in unary_f(x)
     13             else:
     14                 subargs = subvals(args, zip(argnum, x))
---> 15             return fun(*subargs, **kwargs)
     16         if isinstance(argnum, int):
     17             x = args[argnum]

~/Documents/Coding/pennylane/pennylane/qnode.py in __call__(self, *args, **kwargs)
    607
    608         # execute the tape
--> 609         res = self.qtape.execute(device=self.device)
    610
    611         # if shots was changed

~/Documents/Coding/pennylane/pennylane/tape/tape.py in execute(self, device, params)
   1321             params = self.get_parameters()
   1322
-> 1323         return self._execute(params, device=device)
   1324
   1325     def execute_device(self, params, device):

~/Documents/Coding/pennylane/pennylane/tape/tape.py in execute_device(self, params, device)
   1352
   1353         if isinstance(device, qml.QubitDevice):
-> 1354             res = device.execute(self)
   1355         else:
   1356             res = device.execute(self.operations, self.observables, {})

~/Documents/Coding/pennylane/pennylane/_qubit_device.py in execute(self, circuit, **kwargs)
    196         # generate computational basis samples
    197         if self.shots is not None or circuit.is_sampled:
--> 198             self._samples = self.generate_samples()
    199
    200         multiple_sampled_jobs = circuit.is_sampled and self._has_partitioned_shots()

~/Documents/Coding/pennylane/pennylane/_qubit_device.py in generate_samples(self)
    465         rotated_prob = self.analytic_probability()
    466
--> 467         samples = self.sample_basis_states(number_of_states, rotated_prob)
    468         return QubitDevice.states_to_binary(samples, self.num_wires)
    469

~/Documents/Coding/pennylane/pennylane/_qubit_device.py in sample_basis_states(self, number_of_states, state_probability)
    492
    493         basis_states = np.arange(number_of_states)
---> 494         return np.random.choice(basis_states, shots, p=state_probability)
    495
    496     @staticmethod

mtrand.pyx in numpy.random.mtrand.RandomState.choice()

ValueError: setting an array element with a sequence.
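The final frames point at the root cause: generate_samples hands the still-traced analytic probabilities to np.random.choice, which must convert p to a plain float array, and an autograd ArrayBox cannot be converted. A NumPy-only sketch reproduces the conversion failure — the Box class here is a hypothetical stand-in for a tracer object with no __float__, not autograd's actual ArrayBox:

```python
import numpy as np

class Box:
    """Stand-in for autograd's ArrayBox: wraps a value, not float()-able."""
    def __init__(self, value):
        self.value = value

# Probabilities as they look mid-trace: an object array of tracer boxes.
probs = np.array([Box(0.5), Box(0.5)], dtype=object)

try:
    # Mirrors the failing call in sample_basis_states.
    np.random.choice(np.arange(2), 10, p=probs)
except (TypeError, ValueError):
    print("sampling failed, as in the traceback above")
```

Because sampling is inherently non-differentiable, there is no way for this call to succeed while a gradient is being traced, which is why backprop and finite shots cannot be combined.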
System information
Development version of PennyLane.
I have searched existing GitHub issues to make sure the issue does not already exist.