Vectorize and remove for loop in sparse expvals #1596
Conversation
Hello. You may have forgotten to update the changelog!
pennylane/devices/default_qubit.py (Outdated)

```python
        * coo.data
        * qml.math.gather(self.state, coo.col)
    )
    c = qml.math.cast(qml.math.convert_like(coeff, product), "complex128")
```
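The hunk above computes the expectation value directly from the COO triplets of the sparse operator, instead of a dense matrix-vector product. A minimal, self-contained sketch of that idea (toy operator and state, illustrative names, not PennyLane internals):

```python
import numpy as np
from scipy.sparse import coo_matrix

# Toy example: <psi|A|psi> computed directly from the COO triplets of A,
# mirroring the gather-style rewrite above (names here are assumptions).
A = coo_matrix(np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex))  # Pauli-Z
psi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)  # the |+> state

# product[k] = conj(psi[row_k]) * data_k * psi[col_k]; the expval is its sum
product = np.conj(psi[A.row]) * A.data * psi[A.col]
expval = np.real(np.sum(product))  # <+|Z|+> = 0
```

Only the stored nonzeros of `A` are touched, which is what makes the per-term approach cheap for sparse Pauli words.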
I am such an idiot; when I tried to add this yesterday I must have converted to the wrong object, and the rewrite is indeed that simple.
Strange that the qchem tests fail!
Seems to be an unrelated issue, something to do with h5py 🤔
Yes, I just saw... Feel free to tag me for a quick merge once things pass!
Sounds like big fun!
```python
# todo: remove this hack that avoids errors when attempting to multiply
# a nontrainable qml.tensor to a trainable Arraybox
if isinstance(coeff, qml.numpy.tensor) and not coeff.requires_grad:
```
@mariaschuld this is really weird, but the rewrite allows this to be removed 🤔
```python
for op, coeff in zip(observable.ops, observable.data):

    # extract a scipy.sparse.coo_matrix representation of this Pauli word
    coo = qml.operation.Tensor(op).sparse_matrix(wires=self.wires)
```
@soranjh sorry, I just realised I never addressed your suggestion to rename. Do you have a better idea than `coo`?
I thought that `coo` might be confused with the COO sparse format. Maybe consider using `mat` or something similar, but it is not important at all.
```python
if observable.name == "Hamiltonian":
    Hmat = qml.utils.sparse_hamiltonian(observable, wires=self.wires)
elif observable.name == "SparseHamiltonian":
    Hmat = observable.matrix
```
Nice!
```diff
@@ -150,7 +150,7 @@ def sparse_hamiltonian(H, wires=None):
     n = len(wires)
     matrix = scipy.sparse.coo_matrix((2 ** n, 2 ** n), dtype="complex128")

-    coeffs = qml.math.toarray(H.coeffs)
+    coeffs = qml.math.toarray(H.data)
```
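For context, a `sparse_hamiltonian`-style helper accumulates one `scipy.sparse` matrix as a coefficient-weighted sum of Kronecker products of Pauli matrices. A minimal sketch under that assumption (this is illustrative, not PennyLane's actual implementation):

```python
import numpy as np
import scipy.sparse

# Hypothetical helper: build sum_i coeffs[i] * kron(Pauli word i) as one
# (2**n x 2**n) scipy sparse matrix. Pauli words are given as strings.
PAULIS = {
    "I": scipy.sparse.coo_matrix(np.eye(2, dtype=complex)),
    "X": scipy.sparse.coo_matrix(np.array([[0, 1], [1, 0]], dtype=complex)),
    "Z": scipy.sparse.coo_matrix(np.array([[1, 0], [0, -1]], dtype=complex)),
}

def sparse_from_pauli_strings(coeffs, strings):
    n = len(strings[0])
    matrix = scipy.sparse.coo_matrix((2 ** n, 2 ** n), dtype="complex128")
    for c, s in zip(coeffs, strings):
        term = PAULIS[s[0]]
        for ch in s[1:]:
            term = scipy.sparse.kron(term, PAULIS[ch])
        matrix = matrix + c * term
    return matrix

# 0.5 * Z(0) - 0.5 * Z(1) on two qubits
H = sparse_from_pauli_strings([0.5, -0.5], ["ZI", "IZ"])
```

The matrix stays sparse throughout; no dense `2**n x 2**n` array is ever materialized.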
Good point!
Very nice, thanks so much @josh146. So we are back to about a 15x speedup now? :)
(Approved, assuming the qchem issue gets fixed - but that is on master, right?)
Yep, finally fixed this in #1597 🎉
```python
backprop_mode = not isinstance(self.state, np.ndarray)

if backprop_mode:
```
Is backprop the only method we can support with the new addition?
We still support parameter-shift with `qml.expval(H)`; however, we use the `else:` statement, which is more performant.
Yes, I meant if backprop is the only method that works with the new procedure that Maria added which computes the sparse matrix for each term in the Hamiltonian.
Yep 👍 Parameter-shift defaults to computing the full sparse matrix first, and then only a single expval.
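The non-backprop path described here amounts to: build the full sparse matrix once, then perform a single matrix-vector product and inner product. A toy sketch (illustrative Hamiltonian and names, not PennyLane code):

```python
import numpy as np
import scipy.sparse

# Toy diagonal "Hamiltonian" on 2 qubits and a uniform superposition state.
Hmat = scipy.sparse.coo_matrix(np.diag([0.0, 1.0, 1.0, 2.0]))
psi = np.ones(4, dtype=complex) / 2.0

# One sparse mat-vec plus one inner product: <psi|H|psi>
expval = np.real(np.vdot(psi, Hmat @ psi))  # (0 + 1 + 1 + 2) / 4
```

This is the cheaper option when no differentiable per-term computation is required, since the sparse matrix is assembled exactly once.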
Codecov Report
```
@@            Coverage Diff             @@
##           master    #1596      +/-   ##
==========================================
+ Coverage   96.97%   99.13%   +2.16%
==========================================
  Files         195      195
  Lines       14103    14104       +1
==========================================
+ Hits        13676    13982     +306
+ Misses        427      122     -305
```
Continue to review full report at Codecov.
Thanks Josh, looks good to me. I am not sure that the `SparseHamiltonian` class is needed anymore.
Love it!
A beautiful example of great teamwork! I think the four of us contributed in different ways to enable this great UI and high performance 🚀
Context:
In #1551, support for computing the expectation values of Hamiltonians using sparse methods was directly added to `default.qubit`. However, two improvements were noticed:
- The `for` loop over the Hamiltonian observables and coefficients could be replaced with vectorization.
- When not in backprop mode, the full sparse matrix could be computed directly.
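The first improvement can be illustrated with a toy before/after (illustrative data, not PennyLane code): summing `coeff * <psi|O_i|psi>` with a Python loop versus one vectorized reduction.

```python
import numpy as np

coeffs = np.array([0.5, -0.2, 1.3])
term_expvals = np.array([1.0, -1.0, 0.0])  # hypothetical per-term <psi|O_i|psi>

# before: python-level for loop over Hamiltonian terms
total_loop = 0.0
for c, e in zip(coeffs, term_expvals):
    total_loop += c * e

# after: a single vectorized dot product over all terms at once
total_vec = float(np.dot(coeffs, term_expvals))
```

The vectorized form avoids per-term Python overhead and, as noted in the Benefits below, removes the need to cast coefficients to NumPy arrays term by term.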
Description of the Change:
- The `for` loop discussed above has been removed and replaced with vectorization.
- When non-backprop mode is detected, we simply re-use the same logic that is currently used for `qml.SparseHamiltonian`.

Benefits:
- Vectorizing the `for` loop allows us to remove the casting of Hamiltonian coefficients to NumPy arrays that we were performing as a bugfix.
- `qml.SparseHamiltonian` and `qml.Hamiltonian` now share the same code for expvals when using parameter-shift.
- Faster execution compared to backprop on master.
- Faster execution compared to parameter-shift on master.
Possible Drawbacks:
- While improvements to the parameter-shift pipeline in `default.qubit` are nice, we plan to transition to `lightning.qubit` by default, and likely restrict `default.qubit` to purely backprop going forward.
- Is the `SparseHamiltonian` class deprecated? Should we remove it?

Related GitHub Issues: n/a