
Vectorize and remove for loop in sparse expvals #1596

Merged (19 commits, Aug 29, 2021)
Conversation

@josh146 (Member) commented Aug 27, 2021

Context:

In #1551, support for computing the expectation values of Hamiltonians using sparse methods was added directly to default.qubit. However, two further improvements were identified:

  • The for loop over the Hamiltonian observables and coefficients could be replaced with vectorization

  • When not in backprop mode, the full sparse matrix could be computed directly.

Description of the Change:

  • The for loop discussed above has been removed and replaced with vectorization (see the sketch after this list).

  • When non-backprop mode is detected, we simply re-use the same logic that is currently used for qml.SparseHamiltonian.
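
For intuition, here is a minimal sketch of the vectorized expectation value of a single Pauli word stored as a scipy.sparse.coo_matrix. The helper name coo_expval and the plain-NumPy setting are illustrative assumptions, not the PR's exact code:

    # <psi|P|psi> = sum_j conj(psi[row_j]) * data_j * psi[col_j], so the loop
    # over matrix entries becomes three elementwise array operations.
    import numpy as np
    import scipy.sparse

    def coo_expval(coo, state):
        product = np.conj(state[coo.row]) * coo.data * state[coo.col]
        return np.real(np.sum(product))

    # Example: <+|X|+> = 1 for |+> = (|0> + |1>)/sqrt(2)
    X = scipy.sparse.coo_matrix(np.array([[0.0, 1.0], [1.0, 0.0]]))
    plus = np.array([1.0, 1.0]) / np.sqrt(2)
    print(coo_expval(X, plus))  # ~1.0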

Benefits:

  • Vectorizing the for loop allows us to remove the casting of Hamiltonian coefficients to NumPy arrays that we were previously performing as a bugfix.

  • qml.SparseHamiltonian and qml.Hamiltonian now share the same code for expvals when using parameter-shift.

  • Faster execution compared to backprop on master:

    Backprop:
      Using ExpvalCost  : 0.9697349071502686     (1x)
      expval(H), master : 0.2827181816101074   (3.4x)
      expval(H), this PR: 0.08908772468566895 (10.9x)
    
  • Faster execution compared to parameter-shift on master:

    Parameter-shift:
      Using ExpvalCost  : 6.549182176589966    (1x)
      expval(H), master : 2.6744325160980225 (2.4x)
      expval(H), this PR: 1.9294066429138184 (3.4x)
    

Possible Drawbacks:

  • While improvements to the parameter-shift pipeline in default.qubit are nice, we plan to make lightning.qubit the default device, and will likely restrict default.qubit to purely backprop going forward.

  • Is the SparseHamiltonian class deprecated? Should we remove it?

Related GitHub Issues: n/a

@github-actions (bot) commented:

Hello. You may have forgotten to update the changelog!
Please edit .github/CHANGELOG.md with:

  • A one-to-two sentence description of the change. You may include a small working example for new features.
  • A link back to this PR.
  • Your name (or GitHub username) in the contributors section.

product = (  # opening of the expression reconstructed for context (assumed)
    qml.math.gather(qml.math.conj(self.state), coo.row)
    * coo.data
    * qml.math.gather(self.state, coo.col)
)
c = qml.math.cast(qml.math.convert_like(coeff, product), "complex128")
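
As background on the cast line: coeff may live in a different autodiff framework than product, so it is first dispatched to the same interface (qml.math.convert_like) and then cast to complex (qml.math.cast). A toy illustration, with made-up values:

    import pennylane as qml
    from pennylane import numpy as pnp

    # A non-trainable coefficient multiplied into a trainable complex array:
    coeff = pnp.tensor(0.5, requires_grad=False)
    product = pnp.tensor([1.0 + 0j, -1.0 + 0j], requires_grad=True)

    # convert_like moves coeff to product's interface; cast fixes the dtype
    c = qml.math.cast(qml.math.convert_like(coeff, product), "complex128")
    print(c * product)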
Contributor:

I am such an idiot; when I tried to add this yesterday I must have converted to the wrong object, and the rewrite is indeed that simple.

Strange that the qchem tests fail!

Member Author:

Seems to be an unrelated issue, something to do with h5py 🤔

Contributor:

Yes, I just saw... Feel free to tag me for a quick merge once things pass!

Member Author:

[screenshot]

sigh 😆

Contributor:

Sounds like big fun!

Comment on lines -500 to -501
# todo: remove this hack that avoids errors when attempting to multiply
# a nontrainable qml.tensor to a trainable Arraybox
if isinstance(coeff, qml.numpy.tensor) and not coeff.requires_grad:
Member Author:

@mariaschuld this is really weird, but the rewrite allows this to be removed 🤔

@josh146 added the review-ready 👌 label (PRs which are ready for review by someone from the core team) on Aug 27, 2021
for op, coeff in zip(observable.ops, observable.data):

# extract a scipy.sparse.coo_matrix representation of this Pauli word
coo = qml.operation.Tensor(op).sparse_matrix(wires=self.wires)
Contributor:

@soranjh sorry, I just realised I never addressed your suggestion to rename. Do you have a better idea than coo?

Contributor:

I thought that coo might be confused with the coo sparse format. Maybe consider using mat or something similar, but it is not important at all.

if observable.name == "Hamiltonian":
    Hmat = qml.utils.sparse_hamiltonian(observable, wires=self.wires)
elif observable.name == "SparseHamiltonian":
    Hmat = observable.matrix
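
For reference, once the full sparse matrix Hmat is available, the expectation value reduces to one sparse matrix-vector product. A minimal sketch, assuming a plain 1D NumPy state (the helper name is ours):

    import numpy as np

    def full_matrix_expval(Hmat, state):
        # <psi|H|psi>; np.vdot conjugates its first argument, and
        # Hmat @ state is a sparse matrix-vector product.
        return np.real(np.vdot(state, Hmat @ state))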
Contributor:

Nice!

@@ -150,7 +150,7 @@ def sparse_hamiltonian(H, wires=None):
     n = len(wires)
     matrix = scipy.sparse.coo_matrix((2 ** n, 2 ** n), dtype="complex128")

-    coeffs = qml.math.toarray(H.coeffs)
+    coeffs = qml.math.toarray(H.data)
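
A short usage sketch of the function being patched, using only names visible in the hunk above (the example Hamiltonian is made up):

    import pennylane as qml

    H = qml.Hamiltonian([0.5, -0.5], [qml.PauliZ(0), qml.PauliZ(0) @ qml.PauliZ(1)])
    Hmat = qml.utils.sparse_hamiltonian(H, wires=[0, 1])  # scipy sparse matrix
    print(Hmat.toarray().real)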
Contributor:

Good point!

@mariaschuld (Contributor) left a comment:

Very nice, thanks so much @josh146. So we are back to about a 15x speedup now? :)

(Approved, assuming the qchem issue gets fixed - but that is on master, right?)

@josh146 (Member Author) commented Aug 27, 2021

> (Approved, assuming the qchem issue gets fixed - but that is on master, right?)

Yep, finally fixed this in #1597 🎉


backprop_mode = not isinstance(self.state, np.ndarray)

if backprop_mode:
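
As an aside on what this check does: under backprop the simulator state is an interface tensor (e.g. a TensorFlow or Torch tensor, or an Autograd ArrayBox) rather than a plain NumPy array, which is what the isinstance test detects. A small assumed illustration:

    import numpy as np

    state = np.zeros(4, dtype=np.complex128)    # plain NumPy state
    backprop_mode = not isinstance(state, np.ndarray)
    print(backprop_mode)  # False: use the (faster) full-sparse-matrix branch

    # With e.g. the TF interface, state would be a tf.Tensor, the isinstance
    # check would fail, and backprop_mode would be True.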
Contributor:

Is backprop the only method we can support with the new addition?

Member Author:

We still support parameter-shift with qml.expval(H); however, in that case we use the else: branch, which is more performant.

Contributor:

Yes, I meant: is backprop the only method that works with the new procedure Maria added, which computes the sparse matrix for each term in the Hamiltonian?

@josh146 (Member Author) commented Aug 27, 2021:

Yep 👍 Parameter-shift defaults to computing the full sparse matrix first, and then evaluating only a single expval.

codecov bot commented Aug 27, 2021

Codecov Report

Merging #1596 (5b08e68) into master (65320c9) will increase coverage by 2.16%.
The diff coverage is 100.00%.


@@            Coverage Diff             @@
##           master    #1596      +/-   ##
==========================================
+ Coverage   96.97%   99.13%   +2.16%     
==========================================
  Files         195      195              
  Lines       14103    14104       +1     
==========================================
+ Hits        13676    13982     +306     
+ Misses        427      122     -305     
Impacted Files Coverage Δ
pennylane/devices/default_qubit.py 100.00% <100.00%> (ø)
pennylane/utils.py 98.25% <100.00%> (ø)
pennylane/beta/devices/default_tensor.py 96.93% <0.00%> (+1.70%) ⬆️
pennylane/interfaces/batch/tensorflow.py 100.00% <0.00%> (+2.17%) ⬆️
pennylane/devices/default_qubit_tf.py 92.00% <0.00%> (+2.66%) ⬆️
pennylane/beta/devices/default_tensor_tf.py 90.62% <0.00%> (+3.12%) ⬆️
pennylane/interfaces/batch/torch.py 100.00% <0.00%> (+3.27%) ⬆️
pennylane/interfaces/torch.py 100.00% <0.00%> (+3.29%) ⬆️
pennylane/qnode.py 98.47% <0.00%> (+3.43%) ⬆️
pennylane/collections/qnode_collection.py 100.00% <0.00%> (+3.50%) ⬆️
... and 12 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@soranjh (Contributor) left a comment:

Thanks Josh, looks good to me. I am not sure that the SparseHamiltonian class is needed anymore.

@ixfoduap (Contributor) left a comment:

Love it!

A beautiful example of great teamwork! I think the four of us contributed in different ways to enable this great UI and high performance 🚀
