
Adds default.qubit.autograd qubit simulator for use with the PassthruQNode #721

Merged
merged 30 commits into master from passthru-qubit on Jul 30, 2020

Conversation

josh146
Member

@josh146 josh146 commented Jul 27, 2020

Description of the Change:

  • Adds a module pennylane/plugins/autograd_ops.py, which defines the parametrized operations using autograd. This mirrors pennylane/plugins/tf_ops.py.

  • Adds a new default.qubit.autograd device, which inherits from default.qubit and makes the following changes:

    • Uses autograd.numpy for operations and linear algebra
    • Redefines several static methods (including asarray and reduce_sum) to allow them to work with Autograd (Autograd has several gotchas/requirements on top of vanilla NumPy).
  • You can now create a default.qubit.autograd device; when used with the autograd interface, a PassthruQNode is returned, allowing for faster backprop:

    import pennylane as qml
    from pennylane import numpy as np
    
    dev = qml.device("default.qubit.autograd", wires=1)
    @qml.qnode(dev, interface="autograd", diff_method="backprop")
    def circuit(x):
        qml.RX(x[1], wires=0)
        qml.Rot(x[0], x[1], x[2], wires=0)
        return qml.expval(qml.PauliZ(0))
    
    weights = np.array([0.2, 0.5, 0.1], requires_grad=True)
    grad_fn = qml.grad(circuit)
    grad_fn(weights)
  • The device state, accessed via dev.state, is now also differentiable:

    dev = qml.device("default.qubit.autograd", wires=1)
    
    @qml.qnode(dev, diff_method="backprop", interface="autograd")
    def circuit(a):
        qml.RY(a, wires=0)
        return qml.expval(qml.PauliZ(0))
    
    a = np.array(0.54, requires_grad=True)
    
    def cost(a):
        """A function of the device quantum state, as a function
        of input QNode parameters."""
        circuit(a)
        res = np.abs(dev.state) ** 2
        return res[1] - res[0]
    
    grad = qml.grad(cost)(a)

    Note that, since Autograd is purely functional, access to dev.state must be wrapped in a function call.

  • Rather than repeating the same old integration tests for the new device, I simply added the line

    $(PYTHON) $(PLUGIN_TESTRUNNER) --device=default.qubit.autograd

    to the makefile (it works brilliantly @mariaschuld!)
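As a rough illustration of the autograd_ops.py approach, a parametrized gate can be written as a plain function that builds its matrix from traceable trigonometric calls. The sketch below uses vanilla NumPy (autograd.numpy is a drop-in replacement); the RX helper follows PennyLane's half-angle convention but is a hypothetical sketch, not the module's actual code.

```python
import numpy as np  # autograd.numpy is a drop-in replacement for this

def RX(theta):
    """Hypothetical sketch of a parametrized X rotation, in the style of
    autograd_ops.py: the matrix is built from functions autograd can trace."""
    c = np.cos(theta / 2)
    js = -1j * np.sin(theta / 2)
    return np.array([[c, js], [js, c]])

# Applying RX to |0> and measuring in the Z basis reproduces cos(theta)
state = np.array([1.0, 0.0], dtype=complex)
new_state = RX(0.3) @ state
probs = np.abs(new_state) ** 2
expval_z = probs[0] - probs[1]
assert np.isclose(expval_z, np.cos(0.3))
```

Because every step is an ordinary array operation, autograd can differentiate straight through the matrix construction and the state evolution.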

Benefits:

  • Provides users with the speedup of backprop, without requiring TensorFlow be installed.

Possible Drawbacks:

  • The forward pass may be slightly slower than default.qubit, as several vectorized NumPy functions we rely on aren't supported by Autograd and had to be re-implemented as for loops.

  • Currently, evaluation and gradients work when analytic=True. When analytic=False, evaluation continues to work, but attempting to differentiate the result will raise an Autograd error, as the numpy.random module is not supported.

    • Question: do we want to restrict the default.qubit.autograd device to only support analytic=True? This is easy to do (simply overwrite __init__); however, you then lose the ability to evaluate noisy expectations.
  • When Autograd performs the backwards pass, it passes a symbolic ArrayBox object through the computation. However, the templates currently perform input validation on the parameters to ensure that they are iterable --- this will then fail, as the ArrayBox is not an iterable. Simply removing or updating this check from qml.broadcast will allow this device to work with the templates library.

  • Currently, you have to explicitly create a default.qubit.autograd device. Simply creating default.qubit will load the default NumPy simulator which requires the parameter shift rule.

    • Question: should we have device("default.qubit", interface="autograd") automatically load default.qubit.autograd? Similarly, should we have device("default.qubit", interface="tf") automatically load default.qubit.tf?

      I would fully support this, if it wasn't for the fact that neither of the backprop devices support non-analytic mode.
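The ArrayBox failure mode described above can be illustrated without autograd installed: a tracer object that defines no __iter__ fails any iter()-based validation, while a duck-typed check on ndim passes it through. The FakeArrayBox class below is a hypothetical stand-in, and the actual check in qml.broadcast may differ in detail.

```python
import numpy as np

class FakeArrayBox:
    """Hypothetical stand-in for autograd's ArrayBox: wraps an array
    but deliberately defines no __iter__, mimicking the tracer object
    autograd passes through the computation on the backward pass."""
    def __init__(self, value):
        self._value = np.asarray(value)

    @property
    def ndim(self):
        return self._value.ndim

params = FakeArrayBox([0.1, 0.2, 0.3])

# An iterability check like the template validation rejects the box:
try:
    iter(params)
    iterable = True
except TypeError:
    iterable = False
assert not iterable

# A shape-based check lets tracer objects through unchanged:
assert params.ndim == 1
```

This is why relaxing the check (for example, testing ndim instead of iterability) would let the backprop device work with the templates library.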

Related GitHub Issues: n/a

@codecov

codecov bot commented Jul 27, 2020

Codecov Report

Merging #721 into master will decrease coverage by 3.27%.
The diff coverage is 77.77%.


@@            Coverage Diff             @@
##           master     #721      +/-   ##
==========================================
- Coverage   98.74%   95.46%   -3.28%     
==========================================
  Files         101      107       +6     
  Lines        6354     6791     +437     
==========================================
+ Hits         6274     6483     +209     
- Misses         80      308     +228     
Impacted Files Coverage Δ
pennylane/collections/qnode_collection.py 100.00% <ø> (+3.57%) ⬆️
pennylane/numpy/tensor.py 89.65% <ø> (-3.21%) ⬇️
pennylane/plugins/__init__.py 100.00% <ø> (ø)
pennylane/plugins/default_qubit_tf.py 82.60% <0.00%> (-3.76%) ⬇️
pennylane/__init__.py 63.75% <7.14%> (ø)
pennylane/plugins/default_qubit_autograd.py 88.88% <88.88%> (ø)
pennylane/plugins/autograd_ops.py 97.29% <97.29%> (ø)
pennylane/_qubit_device.py 98.60% <100.00%> (-0.69%) ⬇️
pennylane/vqe/vqe.py 78.26% <0.00%> (-21.74%) ⬇️
pennylane/_queuing.py 80.00% <0.00%> (-20.00%) ⬇️
... and 32 more

Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update f7eccce...a2ff021.

@josh146 josh146 added devices 💻 Device or plugin API related review-ready 👌 PRs which are ready for review by someone from the core team. labels Jul 27, 2020

coverage:
@echo "Generating coverage report..."
$(PYTHON) $(TESTRUNNER) $(COVERAGE)
$(PYTHON) $(PLUGIN_TESTRUNNER) --device=default.qubit.autograd $(COVERAGE) --cov-append
Member Author


❤️
We should begin deleting all the repeated device integration tests, and just add lines to the makefile here

@co9olguy
Member

  * Question: do we want to restrict the `default.qubit.autograd` device to simply support `autograd=True`? This is easy to do (simply overwrite `__init__`) _however_ you lose the ability to _evaluate_ noisy expectations.

@josh146 do you mean analytic=True?

@co9olguy
Member

  * Question: should we have `device("default.qubit", interface="autograd")` _automatically load_ `default.qubit.autograd`? Similarly, should we have  `device("default.qubit", interface="tf")` _automatically load_ `default.qubit.tf`?
    I would fully support this, if it wasn't for the fact that neither of the backprop devices support non-analytic mode.

For device("default.qubit", diff_method="backprop") it definitely makes sense

@josh146
Member Author

josh146 commented Jul 27, 2020

@josh146 do you mean analytic=True?

Yep, oops.

For device("default.qubit", diff_method="backprop") it definitely makes sense

I agree. What about if diff_method is not specified? Default to parameter shift? It's a shame, because defaulting to backprop would lead to a substantial performance gain under the hood.

Alternatively, to support users who want to train a model on a simulator with finite shots, the decorator could handle this easily; we just have to take into account the device, the diff_method, and the analytic setting before returning the correct QNode. So:

  • qml.device("default.qubit", wires=2): return default.qubit.autograd, PassthruQNode

  • qml.device("default.qubit", wires=2, interface="tf"): return default.qubit.tf, PassthruQNode

  • qml.device("default.qubit", wires=2, analytic=False, shots=100): return default.qubit, QubitQNode

  • qml.device("default.qubit", wires=2, interface="torch"): return default.qubit, QubitQNode
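The mapping above could be expressed as a small dispatch helper. The function name and exact rules here are a hypothetical sketch, not PennyLane API:

```python
def pick_device_and_qnode(interface="autograd", analytic=True, diff_method="best"):
    """Hypothetical dispatch: choose the concrete device and QNode type
    from the requested interface, analytic flag, and diff method."""
    backprop_devices = {"autograd": "default.qubit.autograd",
                        "tf": "default.qubit.tf"}
    # Backprop devices currently only support analytic mode
    if analytic and diff_method in ("best", "backprop") and interface in backprop_devices:
        return backprop_devices[interface], "PassthruQNode"
    return "default.qubit", "QubitQNode"

assert pick_device_and_qnode() == ("default.qubit.autograd", "PassthruQNode")
assert pick_device_and_qnode(interface="tf") == ("default.qubit.tf", "PassthruQNode")
assert pick_device_and_qnode(analytic=False) == ("default.qubit", "QubitQNode")
assert pick_device_and_qnode(interface="torch") == ("default.qubit", "QubitQNode")
```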

@co9olguy
Member

I agree. What about if diff_method is not specified? Default to parameter shift? It's a shame, because defaulting to backprop would lead to a substantial performance gain under the hood.

In the case that something is not specified, it's best to default to the most sensible thing. I think backprop is the most sensible default for simulators (that can support it), while parameter shift is the most sensible for hardware.


@co9olguy co9olguy left a comment


Thanks @josh146! Amazing how little needs to be added to make it work.

I have some concerns with the _reduce_sum method; I think it should be more carefully implemented and tested.

Review threads (resolved) on:
pennylane/plugins/autograd_ops.py
pennylane/plugins/default_qubit_autograd.py
pennylane/plugins/tests/test_measurements.py
tests/plugins/test_default_qubit_autograd.py
@josh146
Member Author

josh146 commented Jul 28, 2020

Thanks @co9olguy, have made all suggested changes!

In particular, I noticed that np.sum(array, axis=a) gives identical results to np.apply_over_axes(np.sum, array, axes) when axis is passed as a tuple (not a list). This was surprising, since normally axis corresponds to an int argument, and axes corresponds to a list/tuple argument in NumPy 🤔

In any case, this behaviour of np.sum is supported by Autograd, so I dropped my custom reduce_sum implementation!
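The equivalence is easy to check in plain NumPy. One subtlety worth noting: np.apply_over_axes keeps the reduced axes as size-1 dimensions, so np.sum needs keepdims=True for a shape-exact match (the values agree either way):

```python
import numpy as np

a = np.arange(24.0).reshape(2, 3, 4)

# apply_over_axes keeps the reduced dimensions as size-1 axes
via_apply = np.apply_over_axes(np.sum, a, (0, 1))   # shape (1, 1, 4)
# np.sum with a tuple axis matches it exactly when keepdims=True
via_sum = np.sum(a, axis=(0, 1), keepdims=True)     # shape (1, 1, 4)

assert via_apply.shape == via_sum.shape == (1, 1, 4)
assert np.allclose(via_apply, via_sum)
# Without keepdims the values still agree after squeezing
assert np.allclose(np.sum(a, axis=(0, 1)), via_apply.squeeze())
```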

@co9olguy
Member

@josh146 Don't forget to update the changelog! 😀

try:
    from pennylane import numpy as np
except ImportError as e:
Member


decided to do away with it in the end?

Member Author


I can't get codecov to work, it's been driving me mad all day! So this is an attempt for codecov to give me the tick 😝

Contributor


+1 for codecov problems!
I was waiting ages for a commit to master so that I could update #716 and get the tick!

@josh146 josh146 merged commit 9f75818 into master Jul 30, 2020
@josh146 josh146 deleted the passthru-qubit branch July 30, 2020 10:55