
First-order finite-diff differentiation with float32 parameters #1381

Merged: 5 commits into master, Jun 2, 2021

Conversation

albi3ro
Contributor

@albi3ro albi3ro commented Jun 1, 2021

Fixes #1375

Currently, float32 parameters, such as the defaults in the JAX interface, can suffer floating-point precision errors with diff_method='finite-diff' and order=1. JacobianTape.numeric_pd converts all parameters to float64 for the shifted evaluations, but the calculation of "y0" (the unshifted value) does not apply the same conversion.

Downside:
We assume that the tape parameters are all scalars. If matrix gate parameters, like those of qml.Hermitian or qml.QubitUnitary, are not marked untrainable, this fix will raise errors. We should have a better way of preventing trainable parameters from ever being non-scalar than simply discouraging it.
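To illustrate the failure mode, here is a minimal NumPy sketch (an illustration only, not the PennyLane implementation): a first-order forward difference evaluated entirely in float32 degrades badly because the step h is comparable to the float32 spacing near the parameter, while upcasting the parameter to float64 first, in the spirit of what numeric_pd does, restores the expected accuracy.

```python
import numpy as np

def forward_diff(f, x, h=1e-7):
    """First-order forward difference, evaluated in the dtype of x."""
    h = type(x)(h)
    return (f(x + h) - f(x)) / h

true_grad = np.cos(0.5)  # exact derivative of sin at x = 0.5

# Entirely in float32: h is close to the float32 ulp near 0.5 (~6e-8),
# so both the shifted parameter and the function values are heavily quantised.
grad32 = forward_diff(np.sin, np.float32(0.5))

# Upcast the parameter first; everything is then evaluated in float64.
grad64 = forward_diff(np.sin, np.float64(0.5))

err32 = abs(grad32 - true_grad)  # large (at least ~1e-2)
err64 = abs(grad64 - true_grad)  # tiny (order 1e-8, set by truncation error)
```

The mixed-precision bug in the issue is a variant of the same effect: y0 carried float32 rounding error of order 1e-8, which the division by h ≈ 1e-7 then amplified into an O(0.1) error in the derivative.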

@albi3ro albi3ro requested a review from josh146 June 1, 2021 17:18
@codecov

codecov bot commented Jun 1, 2021

Codecov Report

Merging #1381 (c2aa9b6) into master (aba87f1) will increase coverage by 0.00%.
The diff coverage is 100.00%.


@@           Coverage Diff           @@
##           master    #1381   +/-   ##
=======================================
  Coverage   98.16%   98.16%           
=======================================
  Files         154      154           
  Lines       11558    11559    +1     
=======================================
+ Hits        11346    11347    +1     
  Misses        212      212           
Impacted Files                    Coverage   Δ
pennylane/tape/jacobian_tape.py   98.01% <100.00%>   (+<0.01%) ⬆️

Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@albi3ro albi3ro added the review-ready 👌 PRs which are ready for review by someone from the core team. label Jun 1, 2021
Member

@josh146 josh146 left a comment


Thanks @albi3ro! PR looks good from my end. Only suggestion: I recommend adding a test that passes under this fix, but would have failed previously, just to ensure this remains fixed in the future.

We should have a better method to prevent trainable parameters from ever being non-scalar than just discouraging it.

Agree - I think the correct approach here is to make requires_grad=False the default, since it would require users to deliberately opt in to differentiation. However, this is a big change.

@@ -956,3 +956,31 @@ def circuit(a):
res = circuit(0.8, shots=2)
assert len(res) == 2
assert dev.shots == 3


def test_finitediff_float32(tol):
Contributor Author


I was unsure of where to put this test. It's not actually an interface specific thing, just a float32 thing.

Contributor Author


Also, I don't know why StronglyEntanglingLayers has a problem but simpler circuits don't.

Member


I was unsure of where to put this test. It's not actually an interface specific thing, just a float32 thing.

It looks fine here :) Alternatively, it could also be placed in test_jacobian_tape (does a file like that exist?).

Also, I don't know why StronglyEntanglingLayers has a problem but simpler circuits don't.

Weird 🤔

Thanks @albi3ro! So just to confirm, this test fails on master?

Contributor Author


The test fails on master.

I tried writing it in terms of tapes, but then you have to expand StronglyEntanglingLayers, update the parameters, and set them trainable, so I went the simpler route of testing at the QNode level.

@albi3ro albi3ro requested a review from josh146 June 2, 2021 14:59
Member

@josh146 josh146 left a comment


Test looks great, thanks @albi3ro!


@albi3ro albi3ro merged commit 7b25e3e into master Jun 2, 2021
@albi3ro albi3ro deleted the finite_diff_float32 branch June 2, 2021 19:09
Successfully merging this pull request may close these issues.

First-order finite-diff differentiation with float32 parameters results in floating-point errors