Adds default.qubit.autograd qubit simulator for use with the PassthruQNode #721
Conversation
Codecov Report
```
@@            Coverage Diff             @@
##           master     #721      +/-   ##
==========================================
- Coverage   98.74%   95.46%    -3.28%
==========================================
  Files         101      107       +6
  Lines        6354     6791     +437
==========================================
+ Hits         6274     6483     +209
- Misses         80      308     +228
==========================================
```
Continue to review full report at Codecov.
```make
coverage:
	@echo "Generating coverage report..."
	$(PYTHON) $(TESTRUNNER) $(COVERAGE)
	$(PYTHON) $(PLUGIN_TESTRUNNER) --device=default.qubit.autograd $(COVERAGE) --cov-append
```
❤️
We should begin deleting all the repeated device integration tests, and just add lines to the makefile here
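Concretely, the suggestion could look something like the following hypothetical extension of the `coverage` target above — one line per device, all appending to a single coverage report (the `default.qubit.tf` line is illustrative, not part of this PR):

```make
coverage:
	@echo "Generating coverage report..."
	$(PYTHON) $(TESTRUNNER) $(COVERAGE)
	$(PYTHON) $(PLUGIN_TESTRUNNER) --device=default.qubit.autograd $(COVERAGE) --cov-append
	$(PYTHON) $(PLUGIN_TESTRUNNER) --device=default.qubit.tf $(COVERAGE) --cov-append
```

Each new backprop device would then get the shared device integration suite for free, instead of a copied test file.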
@josh146 do you mean
For
Yep, oops.
I agree. Alternatively, to support users who want to train a model on a simulator with finite shots, the decorator could do this easily; we just have to take into account the device.
In the case that something is not specified, it's best to default to the most sensible thing. I think backprop is the most sensible default for simulators (that can support it), while parameter shift is most sensible for hardware.
Thanks @josh146! Amazing how little needs to be added to make it work.
I have some concerns with the `_reduce_sum` method; I think it should be more carefully implemented and tested.
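For context, a loop-based reduction of the kind under discussion might look like this — a hypothetical NumPy sketch, not the PR's actual `_reduce_sum` implementation:

```python
import numpy as np

def reduce_sum(array, axes):
    """Sum over the given axes one at a time.

    Summing the highest axis index first keeps the remaining axis
    indices valid as the array loses dimensions. This loop-based
    form is the kind of helper Autograd can differentiate through.
    """
    for axis in sorted(axes, reverse=True):
        array = np.sum(array, axis=axis)
    return array

x = np.arange(24).reshape(2, 3, 4)
print(reduce_sum(x, [0, 2]).shape)  # → (3,)
```

The result should agree with a single vectorized `np.sum(x, axis=(0, 2))`, which is one easy property to test.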
Co-authored-by: Nathan Killoran <co9olguy@users.noreply.github.com>
Thanks @co9olguy, have made all suggested changes!
@josh146 Don't forget to update the changelog! 😀
```python
try:
    from pennylane import numpy as np
except ImportError as e:
```
decided to do away with it in the end?
I can't get codecov to work, it's been driving me mad all day! So this is an attempt for codecov to give me the tick 😝
+1 for codecov problems!
I was waiting ages for a commit to master so that I could update #716 and get the tick!
Description of the Change:

- Adds a module `pennylane/plugins/autograd_ops.py`, which defines the parametrized operations using autograd. This mirrors `pennylane/plugins/tf_ops.py`.
- Adds a new `default.qubit.autograd` device, which inherits from `default.qubit` and makes the following changes: it uses `autograd.numpy` for operations and linear algebra, and overrides several helper methods (`asarray` and `reduce_sum`) to allow them to work with Autograd (Autograd has several gotchas/requirements on top of vanilla NumPy).
- You can now create a `default.qubit.autograd` device, and, when used with the `autograd` interface, a `PassthruQNode` will be returned, allowing for faster backprop.
- The device state, accessed via `dev.state`, is now also differentiable. Note that, since autograd is fully functional, `dev.state` must be wrapped in a function call.
- Rather than repeating the same old integration tests for the new device, I simply added a single line to the makefile (it works brilliantly @mariaschuld!)
Benefits:
Possible Drawbacks:

- The forward pass may be slightly slower than `default.qubit`, as several vectorized NumPy functions we rely on aren't supported by Autograd, and so had to be re-implemented as for-loops.
- Currently, evaluation and gradients work when `analytic=True`. When `analytic=False`, evaluation continues to work, but attempting to differentiate the result will raise an Autograd error, as the `np.random` module is not supported. Question: should we restrict the `default.qubit.autograd` device to simply support `analytic=True`? This is easy to do (simply overwrite `__init__`), however you lose the ability to evaluate noisy expectations.
- When Autograd performs the backwards pass, it passes a symbolic `ArrayBox` object through the computation. However, the templates currently perform input validation on the parameters to ensure that they are iterable — this will then fail, as the `ArrayBox` is not an iterable. Simply removing or updating this check in `qml.broadcast` will allow this device to work with the templates library.
- Currently, you have to explicitly create a `default.qubit.autograd` device. Simply creating `default.qubit` will load the default NumPy simulator, which requires the parameter-shift rule. Question: should we have `device("default.qubit", interface="autograd")` automatically load `default.qubit.autograd`? Similarly, should we have `device("default.qubit", interface="tf")` automatically load `default.qubit.tf`? I would fully support this, if it wasn't for the fact that neither of the backprop devices supports non-analytic mode.
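The automatic-loading question above amounts to a small dispatch table keyed on the interface, with a guard for the non-analytic case. A purely hypothetical sketch — none of these names are PennyLane API:

```python
# Hypothetical dispatch for the question discussed above.
BACKPROP_DEVICES = {
    "autograd": "default.qubit.autograd",
    "tf": "default.qubit.tf",
}

def resolve_device(short_name, interface=None, analytic=True):
    """Map a requested device name to a backprop-capable device.

    Only reroute in analytic mode, since neither backprop device
    currently supports finite shots; otherwise keep the requested
    device and fall back to the parameter-shift rule.
    """
    if short_name == "default.qubit" and analytic and interface in BACKPROP_DEVICES:
        return BACKPROP_DEVICES[interface]
    return short_name

print(resolve_device("default.qubit", interface="autograd"))           # default.qubit.autograd
print(resolve_device("default.qubit", interface="tf", analytic=False)) # default.qubit
```

With a rule like this, the analytic-mode limitation becomes an implementation detail of the dispatch rather than a blocker for the feature.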
Related GitHub Issues: n/a