
Move frobenius_inner_product to qml.math #1388

Merged · 22 commits into PennyLaneAI:master · Aug 15, 2021

Conversation

johannesjmeyer
Contributor

Context:
In the QML demo (PennyLaneAI/qml#266), it turned out that the function frobenius_inner_product was not differentiable because regular numpy was imported in qml.utils.

Description of the Change:
frobenius_inner_product was moved to qml.math. To this end, new files were introduced.

Benefits:
Differentiability.

Possible Drawbacks:
Slight decrease in performance.
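For context, the quantity in question can be sketched with plain numpy. This is a minimal illustration of what frobenius_inner_product computes for real matrices, using the expected values from the PR's test cases; it is not PennyLane's actual implementation, which dispatches over autograd/TF/Torch/JAX tensors via qml.math:

```python
import numpy as np

def frobenius_inner_product(A, B, normalize=False):
    """Frobenius inner product <A, B> = sum_ij A_ij B_ij (= trace(A^T B) for real matrices).

    Minimal numpy-only sketch; the qml.math version is framework-agnostic.
    """
    inner = np.sum(A * B)
    if normalize:
        # Divide by the product of the Frobenius norms of A and B.
        inner = inner / np.sqrt(np.sum(A * A) * np.sum(B * B))
    return inner

A = np.array([[1.0, 2.3], [-1.3, 2.4]])
B = np.array([[0.7, -7.3], [-1.0, -2.9]])
print(frobenius_inner_product(A, B))                  # -21.75
print(frobenius_inner_product(A, B, normalize=True))  # ≈ -0.7381450594
```

These match the expectations in the test-case table quoted later in the review thread.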

Member

@josh146 josh146 left a comment


Thanks @johannesjmeyer!

I have to admit, when I suggested using qml.math (via @co9olguy), I meant simply re-writing frobenius_inner_product using qml.math 🙂 But on re-reading, I think your approach here makes more sense.

Comment on lines 26 to 39
```python
    (np.eye(2), np.eye(2), False, 2.0),
    (np.eye(2), np.zeros((2, 2)), False, 0.0),
    (
        np.array([[1.0, 2.3], [-1.3, 2.4]]),
        np.array([[0.7, -7.3], [-1.0, -2.9]]),
        False,
        -21.75,
    ),
    (np.eye(2), np.eye(2), True, 1.0),
    (
        np.array([[1.0, 2.3], [-1.3, 2.4]]),
        np.array([[0.7, -7.3], [-1.0, -2.9]]),
        True,
        -0.7381450594,
```
Member


I would also:

  • Add test cases using pennylane.numpy
  • Add test cases using TF
  • Add test cases using Torch
  • Add test cases using JAX :)

Contributor Author


I think it would be great to add a way to the library to automate this; it can't be that hard, right? I.e., run the same test case with different variable types depending on which libraries are available.

@codecov

codecov bot commented Jun 16, 2021

Codecov Report

Merging #1388 (8f60231) into master (1596b9a) will increase coverage by 0.00%.
The diff coverage is 100.00%.


@@           Coverage Diff           @@
##           master    #1388   +/-   ##
=======================================
  Coverage   98.21%   98.21%           
=======================================
  Files         184      184           
  Lines       13209    13212    +3     
=======================================
+ Hits        12973    12976    +3     
  Misses        236      236           
Impacted Files                          Coverage Δ
pennylane/math/__init__.py              100.00% <ø> (ø)
pennylane/utils.py                      96.31% <ø> (-0.11%) ⬇️
pennylane/kernels/cost_functions.py     100.00% <100.00%> (ø)
pennylane/math/multi_dispatch.py        100.00% <100.00%> (ø)

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 1596b9a...8f60231.

Member

@josh146 josh146 left a comment


Hey @johannesjmeyer, just double checking the status of this PR 🙂

Looks like it is in good shape; it's just missing tests for TF, autograd, and JAX gradients?

johannesjmeyer and others added 2 commits July 19, 2021 15:02
Co-authored-by: Josh Izaac <josh146@gmail.com>
Co-authored-by: Josh Izaac <josh146@gmail.com>
@johannesjmeyer
Contributor Author

I already added the gradient tests and re-requested a review a while ago; can you have a look and check whether the tests are alright? I'm on vacation until August 2nd.

Member

@josh146 josh146 left a comment


Apologies for the delay @johannesjmeyer; looks good from my end 💯

In my previous comment I think I was referring to the fact that there is a Torch gradient test but no gradient tests for JAX/TF/Autograd. However, looking at the code again, I am satisfied that the Torch test is sufficient (the amount of processing is minimal!)

@johannesjmeyer
Contributor Author

Cool, then you can merge it in :) I don't have sufficient rights anymore.

@josh146 josh146 merged commit 0088704 into PennyLaneAI:master Aug 15, 2021
@co9olguy
Member

@josh146 worth a changelog update? (unless I missed it)

@josh146
Member

josh146 commented Aug 15, 2021

@co9olguy good catch. In an earlier review it was decided to simply amend the kernel-module changelog entry, but that will have to be updated now that the change is crossing a release.

@johannesjmeyer
Contributor Author

Should I take care of that?

@josh146
Member

josh146 commented Aug 16, 2021

> Should I take care of that?

All good @johannesjmeyer, I can take this one! My fault for the PR delay 🙂

josh146 added a commit that referenced this pull request Aug 16, 2021
josh146 added a commit that referenced this pull request Aug 16, 2021
* Update changelog entry for #1388

* remove