
[BugFix] Error in documentation's numpy differentiation code example #1499

Merged: 11 commits into master from fix_1498 on Aug 5, 2021

Conversation

@mariaschuld (Contributor) commented on Aug 5, 2021:

Context:

Updates a failing code example.

Description of the Change:

Use keyword argument to mark a non-differentiable wires argument.
Add a random seed to make the example reproducible.

Updated example reads:

[Screenshot of the updated example, taken 2021-08-05 12:34]
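Since the screenshot content does not survive here, the following is a rough sketch of what the updated example could look like. Only the signature `circuit(weights, data, wires)`, the `CNOT`/`expval` lines, and the keyword-argument call are taken from the diff in this PR; the embedding gates, parameter shapes, and seed value are assumptions.

```python
import pennylane as qml
from pennylane import numpy as np

np.random.seed(42)  # seed added for reproducibility (exact value is an assumption)

dev = qml.device("default.qubit", wires=3)

@qml.qnode(dev)
def circuit(weights, data, wires=None):
    # `wires` is passed as a keyword argument, so autograd treats it
    # as non-trainable
    qml.RY(data[0], wires=wires[0])          # assumed use of `data`
    qml.RX(weights[0], wires=wires[1])       # assumed use of `weights`
    qml.CNOT(wires=[wires[0], wires[2]])     # from the diff
    return qml.expval(qml.PauliZ(wires[0]))  # from the diff

weights = np.array(np.random.random(1), requires_grad=True)
data = np.array(np.random.random(1), requires_grad=False)

print(circuit(weights, data, wires=[2, 0, 1]))
```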

Related GitHub Issues:
#1498

@mariaschuld changed the title from "Fix for Issue #1498: Error in documentation's numpy differentiation code example" to "[BugFix] Error in documentation's numpy differentiation code example" on Aug 5, 2021
github-actions bot commented on Aug 5, 2021:

Hello. You may have forgotten to update the changelog!
Please edit .github/CHANGELOG.md with:

  • A one-to-two sentence description of the change. You may include a small working example for new features.
  • A link back to this PR.
  • Your name (or GitHub username) in the contributors section.

@@ -163,20 +163,22 @@ and two non-trainable arguments ``data`` and ``wires``:
         qml.CNOT(wires=[wires[0], wires[2]])
         return qml.expval(qml.PauliZ(wires[0]))
 
-We must specify that ``data`` and ``wires`` are NumPy arrays with ``requires_grad=False``:
+Since ``wires`` is a keyword argument, it will be automatically non-trainable. However, we
@mariaschuld (Contributor, Author):
@josh146 will this also be true after we give up the backwards compatibility hack?

@josh146 (Member):
Nope! In fact, this isn't even a PL thing, but an autograd thing - it is autograd that is ignoring keyword arguments 😆 So maybe best to reword this?
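A minimal sketch of the autograd behaviour josh146 describes; the circuit and argument names here are hypothetical, not from the PR:

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def circuit(x, shift=0.0):
    qml.RX(x + shift, wires=0)
    return qml.expval(qml.PauliZ(0))

x = np.array(0.5, requires_grad=True)

# qml.grad differentiates only the positional argument `x`; the keyword
# argument `shift` never enters autograd's trace, so it is ignored
print(qml.grad(circuit)(x, shift=0.1))
```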

codecov bot commented on Aug 5, 2021:

Codecov Report

Merging #1499 (f943690) into master (7db893f) will not change coverage.
The diff coverage is n/a.


@@           Coverage Diff           @@
##           master    #1499   +/-   ##
=======================================
  Coverage   98.36%   98.36%           
=======================================
  Files         182      182           
  Lines       12933    12933           
=======================================
  Hits        12722    12722           
  Misses        211      211           

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 7db893f...f943690.

@josh146 (Member) left a review:

Thanks for getting to this so quickly @mariaschuld! I think we also need to solve the underlying issue (the backwards-compatibility rule that everything in a positional argument is trainable) ASAP.

doc/introduction/interfaces/numpy.rst (comment outdated, resolved)
>>> circuit(weights, data, wires)
0.16935626052294817
>>> wires = [2, 0, 1]
>>> circuit(weights, data, wires=wires)
@josh146 (Member):
As far as I am aware, this is the only code change necessary to make it work; the change above (``wires=None``) should not be needed.
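Reusing the hypothetical `circuit`, `weights`, and `data` from the sketch under the PR description, the call-site change josh146 refers to would amount to no more than this (whether this alone reproduces the printed value in the diff is an assumption):

```python
wires = [2, 0, 1]

# Keyword argument: `wires` stays out of autograd's view
result = circuit(weights, data, wires=wires)

# Positional argument: under the backwards-compatibility rule, `wires`
# would be treated as trainable, which is what broke the original example
# result = circuit(weights, data, wires)
```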

Comment on lines 176 to 177
When we compute the derivative, arguments with ``requires_grad=False``, as well as keyword arguments,
are ignored by :func:`~.grad`:
@josh146 (Member):
👍

I think this is all that needs to be said?
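A hedged illustration of the quoted sentence, using a hypothetical circuit: `qml.grad` differentiates only positional arguments whose tensors carry `requires_grad=True`.

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def f(theta, scale):
    qml.RX(scale * theta, wires=0)
    return qml.expval(qml.PauliZ(0))

theta = np.array(0.3, requires_grad=True)
scale = np.array(2.0, requires_grad=False)

# The gradient is taken with respect to `theta` only; `scale` is skipped
# because its tensor carries requires_grad=False
print(qml.grad(f)(theta, scale))
```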

@@ -122,12 +122,12 @@ Differentiable and non-differentiable arguments
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 How does PennyLane know which arguments of a quantum function to differentiate, and which to ignore?
-For example, you may want to pass arguments as positional arguments to a QNode but *not* have
+For example, you may want to pass arguments to a QNode but *not* have
@mariaschuld (Contributor, Author):
I decided to polish the entire paragraph so that it makes sense together. I think "as positional arguments" is confusing here, because we only state the rule about positional arguments below.

 PennyLane consider them when computing gradients.
 
 As a basic rule, **all positional arguments provided to the QNode are assumed to be differentiable
-by default**. To accomplish this, all arrays created by the PennyLane NumPy module have a special
-flag ``requires_grad`` specifying whether they are trainable or not:
+by default**. To accomplish this, arguments are internally turned into arrays of the PennyLane NumPy module,
@mariaschuld (Contributor, Author):
I think this piece of information is super important!
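For readers of this thread, a small self-contained sketch of the flag the reworded paragraph describes (the values are arbitrary):

```python
from pennylane import numpy as np

# Arrays created via the PennyLane-wrapped NumPy carry a requires_grad
# flag, and it defaults to True; this is why positional QNode arguments
# are treated as differentiable unless flagged otherwise
a = np.array([0.1, 0.2])
b = np.array([0.3, 0.4], requires_grad=False)

print(a.requires_grad)  # True (the default)
print(b.requires_grad)  # False
```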

@mariaschuld (Contributor, Author) commented:
@josh146 I updated the entire section to be clearer. We will only need to make one or two text adjustments when the upcoming change happens, but the examples should work either way.

Do you think this is better?

@josh146 (Member) left a review:

Thanks @mariaschuld, reads 💯 from my end!

@mariaschuld mariaschuld merged commit d9383f6 into master Aug 5, 2021
@mariaschuld mariaschuld deleted the fix_1498 branch August 5, 2021 12:39