Move frobenius_inner_product to qml.math #1388
Conversation
Thanks @johannesjmeyer!
I have to admit, when I suggested using qml.math (via @co9olguy), I simply meant rewriting frobenius_inner_product using qml.math 🙂 But on re-reading, I think your approach here makes more sense.
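For reference, a rewrite along the lines of that original suggestion might have looked something like this. This is a minimal sketch, not the code in this PR; it assumes qml.math dispatches sum and sqrt to the interface of the inputs, as it does via autoray:

```python
import pennylane as qml

def frobenius_inner_product(A, B, normalize=False):
    # qml.math dispatches to the interface of the inputs (NumPy,
    # Autograd, TF, Torch, JAX), keeping the result differentiable.
    inner_product = qml.math.sum(A * B)
    if normalize:
        norm = qml.math.sqrt(qml.math.sum(A * A) * qml.math.sum(B * B))
        inner_product = inner_product / norm
    return inner_product
```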
tests/math/test_basic_math.py
```python
(np.eye(2), np.eye(2), False, 2.0),
(np.eye(2), np.zeros((2, 2)), False, 0.0),
(
    np.array([[1.0, 2.3], [-1.3, 2.4]]),
    np.array([[0.7, -7.3], [-1.0, -2.9]]),
    False,
    -21.75,
),
(np.eye(2), np.eye(2), True, 1.0),
(
    np.array([[1.0, 2.3], [-1.3, 2.4]]),
    np.array([[0.7, -7.3], [-1.0, -2.9]]),
    True,
    -0.7381450594,
),
```
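For reference, the expected values above follow directly from the definition ⟨A, B⟩_F = Σ_ij A_ij B_ij = Tr(AᵀB); the normalized variant divides by the product of the Frobenius norms. A quick NumPy check:

```python
import numpy as np

A = np.array([[1.0, 2.3], [-1.3, 2.4]])
B = np.array([[0.7, -7.3], [-1.0, -2.9]])

# Unnormalized: sum of elementwise products.
print(np.sum(A * B))  # 0.7 - 16.79 + 1.3 - 6.96 = -21.75

# Normalized: divide by the product of the Frobenius norms.
print(np.sum(A * B) / (np.linalg.norm(A) * np.linalg.norm(B)))  # ≈ -0.7381450594
```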
I would also:
- Add test cases using pennylane.numpy
- Add test cases using TF
- Add test cases using Torch
- Add test cases using JAX :)

(One way these could look is sketched below.)
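A sketch of what such interface-specific cases could look like. The test names and structure are illustrative, not the tests that ended up in the PR:

```python
import numpy as np
import pytest

import pennylane as qml
from pennylane import numpy as pnp

A_VALS = [[1.0, 2.3], [-1.3, 2.4]]
B_VALS = [[0.7, -7.3], [-1.0, -2.9]]

def test_frobenius_inner_product_autograd():
    # pennylane.numpy (Autograd) tensors should behave like vanilla NumPy.
    A, B = pnp.array(A_VALS), pnp.array(B_VALS)
    assert np.isclose(qml.math.frobenius_inner_product(A, B), -21.75)

def test_frobenius_inner_product_torch():
    # Skipped automatically when Torch is not installed; the TF and
    # JAX cases would follow the same pattern.
    torch = pytest.importorskip("torch")
    A, B = torch.tensor(A_VALS), torch.tensor(B_VALS)
    assert np.isclose(float(qml.math.frobenius_inner_product(A, B)), -21.75)
```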
I think it would be great to add a utility to the library that automates this; it can't be that hard, right? I.e., run the same test case with different variable types depending on which libraries are available.
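One possible way to automate this, as a sketch (this collection logic is an assumption, not an existing PennyLane utility): build the parametrization list from whichever libraries import successfully.

```python
import numpy as np
import pytest

import pennylane as qml
from pennylane import numpy as pnp

# One array constructor per interface; libraries that are not
# installed are left out at collection time instead of erroring.
ARRAY_FNS = [np.array, pnp.array]
for lib, attr in [("tensorflow", "constant"), ("torch", "tensor"), ("jax.numpy", "array")]:
    try:
        ARRAY_FNS.append(getattr(__import__(lib, fromlist=[attr]), attr))
    except ImportError:
        pass

@pytest.mark.parametrize("array_fn", ARRAY_FNS)
def test_frobenius_inner_product_any_interface(array_fn):
    A = array_fn([[1.0, 2.3], [-1.3, 2.4]])
    B = array_fn([[0.7, -7.3], [-1.0, -2.9]])
    assert np.isclose(float(qml.math.frobenius_inner_product(A, B)), -21.75)
```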
Codecov Report
```
@@           Coverage Diff           @@
##           master    #1388   +/-   ##
=======================================
  Coverage   98.21%   98.21%
=======================================
  Files         184      184
  Lines       13209    13212       +3
=======================================
+ Hits        12973    12976       +3
  Misses        236      236
```
Continue to review the full report at Codecov.
Hey @johannesjmeyer, just double-checking the status of this PR 🙂
Looks like it is in good shape; is it just missing tests for TF, autograd, and JAX gradients?
Co-authored-by: Josh Izaac <josh146@gmail.com>
I already added the gradient tests and re-requested a review earlier; could you check whether the tests are alright? I'm on vacation until August 2nd.
Apologies for the delay @johannesjmeyer; looks good from my end 💯
In my previous comment I think I was referring to the fact that there is a Torch gradient test, but no gradient test for JAX/TF/Autograd. However, from looking at the code again, I am satisfied that the Torch test is sufficient (the amount of processing is minimal!)
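For context, the Torch gradient test being discussed boils down to something like this sketch (not the exact test in the PR): since the gradient of Σ_ij A_ij B_ij with respect to A is B, the backward pass should return B itself.

```python
import numpy as np
import pytest

import pennylane as qml

def test_torch_gradient():
    torch = pytest.importorskip("torch")
    A = torch.tensor([[1.0, 2.3], [-1.3, 2.4]], requires_grad=True)
    B = torch.tensor([[0.7, -7.3], [-1.0, -2.9]])

    result = qml.math.frobenius_inner_product(A, B)
    result.backward()

    # The gradient of sum(A * B) with respect to A is just B.
    assert np.allclose(A.grad.numpy(), B.numpy())
```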
Cool, then you can merge it in :) I don't have sufficient rights anymore.
@josh146 worth a changelog update? (unless I missed it)
@co9olguy good catch. It was decided in an earlier review to simply amend the Kernel module changelog entry, but this will have to be updated now that the PR is crossing a release.
Should I take care of that?
All good @johannesjmeyer, I can take this one! My fault for the PR delay 🙂
Context:
In the QML demo (PennyLaneAI/qml#266), it turned out that the function frobenius_inner_product was not differentiable because regular numpy was imported in qml.utils.

Description of the Change:
frobenius_inner_product was moved to qml.math. To this end, new files were introduced.

Benefits:
Differentiability.

Possible Drawbacks:
Slight decrease in performance.
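To illustrate the benefit, a minimal sketch (assuming the merged qml.math.frobenius_inner_product) of the gradient computation that the old qml.utils version could not provide:

```python
import pennylane as qml
from pennylane import numpy as np

A = np.array([[1.0, 2.3], [-1.3, 2.4]], requires_grad=True)
B = np.array([[0.7, -7.3], [-1.0, -2.9]], requires_grad=False)

# Autograd can now differentiate through the qml.math implementation;
# the gradient of sum(A * B) with respect to A is B.
grad_fn = qml.grad(qml.math.frobenius_inner_product, argnum=0)
print(grad_fn(A, B))  # equals B
```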