[TAPE] Add support for tape execution #796
Conversation
Co-authored-by: Nathan Killoran <co9olguy@users.noreply.github.com> Co-authored-by: Tom Bromley <49409390+trbromley@users.noreply.github.com>
Great!
Is there a link to the conversation? I'm curious what the advantages are.
Codecov Report
```
@@           Coverage Diff           @@
##           master     #796   +/-   ##
=======================================
  Coverage   92.30%   92.30%
=======================================
  Files         121      121
  Lines        7715     7737    +22
=======================================
+ Hits         7121     7142    +21
- Misses        594      595     +1
```
Continue to review full report at Codecov.
It was on Slack unfortunately! We didn't explore in too much detail. More so that
Thanks @josh146, looks great
```python
except (AttributeError, TypeError):
    # unable to determine the output dimension
    pass
```
codecov seems to have a legitimate complaint here
Woah! This in-line commenting by codecov is new
Looks good to me! 💯 Nice @josh146 😊
```python
# execution methods
# ========================================================

def execute(self, device, params=None):
```
Wondering about the naming here: previously one had `QNode.evaluate`. Could be good to have `evaluate` here too?
I have to admit, I'm a bit personally biased towards `execute` over `evaluate`, partially because I find `execute` more descriptive: it will actually be calling the quantum device for the result.

Having a different term here will also make the final find-all/replace-all easier when we replace core!

But happy to change it back if everyone agrees.
```python
else:
    res = device.execute(self.operations, self.observables, {})

# Update output dim if incorrect.
```
How come the output dim needs to be updated in certain cases? Might be worth noting that here.
The output dimension of the QNode needs to be known in order for the Jacobian to be created properly. Currently, the QNode computes the output dimension when processing the queue:

- If a measurement process that returns a scalar is included, the output dim is incremented by `1`.
- If a probability is included, the output dim is incremented by `2 ** len(wires)`.
- If a CV probability is included, the output dim is incremented by `cutoff ** len(wires)`.

However, this is quite inelegant. It means that devices aren't free to define their own output without always needing to make the QNode aware of what the output dimension is (example: when we added CV probability, we needed to update the output dim in the QNode).

Instead, what we can do is add another approach: the tape simply looks at the output size after execution, without needing to know anything in advance!

This has a drawback though --- you always need to evaluate the QNode before computing the Jacobian.
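As a rough illustration of the two strategies, here is a minimal NumPy-only sketch. The names `estimate_output_dim` and `actual_output_dim`, and the `(kind, wires)` measurement format, are hypothetical and not the PennyLane API:

```python
import numpy as np

def estimate_output_dim(measurements, cutoff=None):
    """Pre-execution estimate: sum a per-measurement-type contribution.
    `measurements` is a list of (kind, wires) pairs (illustrative format)."""
    dim = 0
    for kind, wires in measurements:
        if kind == "expval":       # scalar-valued measurement -> +1
            dim += 1
        elif kind == "probs":      # qubit probability -> +2**len(wires)
            dim += 2 ** len(wires)
        elif kind == "cv_probs":   # CV probability -> +cutoff**len(wires)
            dim += cutoff ** len(wires)
    return dim

def actual_output_dim(results):
    """Post-execution approach: just measure the size of what came back."""
    return sum(np.asarray(r).size for r in results)
```

The second function needs no knowledge of the device's measurement semantics, but it can only run after the device has been executed at least once.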
```python
assert tape.output_dim == sum([2, 2])

def test_incorrect_ragged_output_dim_estimate(self):
    """Test that a quantum tape with an incorrect *ragged* output dimension
```
Not completely sure about the distinction between this test case and the previous one. Perhaps might be worth parametrizing?
The difference is in the tape measurements 🙂 The previous test had output `[probs(wires=0), probs(wires=1)]`, which will output a NumPy array with shape `(2, 2)`.

This test instead has a tape with output `[probs(wires=0), probs(wires=[1, 2])]`, which will be a ragged NumPy object array that looks like `[[a, b], [c, d, e, f]]`. So we are dealing with a completely different `dtype` to the previous case.

Since we are testing different logic here, I don't think this should be parametrized (parametrization should be restricted to testing the same logic with different data).
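To make the `dtype` difference concrete, a small NumPy-only illustration (the probability values are placeholders):

```python
import numpy as np

# [probs(wires=0), probs(wires=1)]: both results have length 2, so they
# stack into a regular (2, 2) float array.
regular = np.stack([np.array([0.5, 0.5]), np.array([0.4, 0.6])])

# [probs(wires=0), probs(wires=[1, 2])]: results of length 2 and 4 cannot
# be stacked; together they can only be held as a ragged object array,
# shaped like [[a, b], [c, d, e, f]].
ragged = np.empty(2, dtype=object)
ragged[0] = np.array([0.5, 0.5])
ragged[1] = np.array([0.25, 0.25, 0.25, 0.25])
```

Any code that inspects the result's shape or dtype therefore has to handle these two cases separately, which is why the tests exercise them with distinct logic.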
Co-authored-by: Nathan Killoran <co9olguy@users.noreply.github.com>
Hey @trbromley ... this came from the idea of batching: if you want to send a list of tapes, you could do `dev.execute([tape1, tape2, ...])`. But my latest favourite solution for batching would be to design a tape so that it can represent a batch of tapes itself, in which case
Context: Second PR in the tape series, following on from #792
Description of the Change:
Adds support for tape execution on devices. In particular, it adds the following methods:
- `tape.execute(device, params=None)`: the user-facing call for tape execution on a given device. This is always the method users should use; if an interface is being used, this method will be wrapped to support autodifferentiation. Note that parameters need not be passed; if not passed, the current tape parameters will be used for execution.
- `tape.execute_device(device, params)`: a low-level method for executing the tape on a device. This is where the main logic takes place. When using an interface, this will be behind the interface; calling this method will not support autodifferentiation.
- `tape._execute`: an optional method (simply an alias to `tape.execute_device`). The interface may override this method if it needs to insert logic in between the two execute calls. It has the same signature as `execute_device`.

Note: the three methods, their signatures, and the constant getting and setting of tape parameters might seem overly complex. I arrived at this working solution after getting all three interfaces to work. In particular:
- Autograd requires that the method it wraps accepts the differentiable numeric parameters as a positional argument. So it must have `params` in the signature, and it cannot have `device` be a positional argument!
- In TensorFlow and PyTorch, however, you want the opposite: the tape has already been constructed with tensors; you simply want to execute it as-is, and avoid needing to 're-submit' the trainable tensors to the `execute` method.

Benefits: Tape is now executable using all devices
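The three-method layering described above can be sketched as follows. This is an illustrative simplification, not the actual PennyLane implementation; here a "device" is reduced to a plain callable on the parameters:

```python
class Tape:
    """Minimal stand-in for a quantum tape holding trainable parameters."""

    def __init__(self, params):
        self._params = list(params)

    def get_parameters(self):
        return list(self._params)

    def set_parameters(self, params):
        self._params = list(params)

    def execute(self, device, params=None):
        # User-facing call: if params are not passed, the current tape
        # parameters are used for execution.
        if params is None:
            params = self.get_parameters()
        return self._execute(params, device=device)

    def execute_device(self, params, device):
        # Low-level call: params is positional (as autograd requires for
        # differentiable arguments), device is passed by keyword.
        saved = self.get_parameters()
        self.set_parameters(params)
        try:
            return device(self.get_parameters())
        finally:
            self.set_parameters(saved)

    # Alias that an interface may override to insert logic between the
    # user-facing call and the device execution.
    _execute = execute_device
```

With this layering, autograd can wrap the positional-`params` path, while the TensorFlow and PyTorch interfaces can call `execute` directly on the already-constructed tape without re-submitting tensors.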
Possible Drawbacks:
@mariaschuld pointed out that there are some potential advantages to instead having [...] rather than [...] be the user-facing call. However, this will require significant changes:
- The interface will need to be applied to the device, not the tape.
- Similar interface requirements for the `device.execute` method signature would need to be considered.
- We are in a position where the interface Jacobian call is in a different location in the code base to the wrapped execute method!
I think this will still be doable, but I propose we explore this after the tape refactor, as it will definitely require interface + device API modifications. We may even be able to support both approaches.
Related GitHub Issues: n/a