
Add quantum Monte Carlo template #1130

Merged · 48 commits into master from add_qmc_1d · Mar 17, 2021

Conversation

trbromley
Contributor

@trbromley trbromley commented Mar 9, 2021

Description of the Change:

Adds a template for performing quantum Monte Carlo estimation.

Possible Drawbacks:

  • Only compatible with simulators because we work with matrices. A note is added to the docstrings to explain this.
  • Assumes a 1D probability distribution. Multidimensional distributions can be input by considering the product distribution.
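
For context, a minimal usage sketch in the spirit of the docstring example quoted later in this thread. It assumes the template is exposed as qml.templates.QuantumMonteCarlo with a (probs, func, target_wires, estimation_wires) signature; the device, wire layout, and post-processing mirror the snippets quoted in the review below, so treat it as illustrative rather than the final API:

```python
import numpy as np
import pennylane as qml
from scipy.stats import norm

m = 5
M = 2 ** m  # number of points in the discretized distribution (a power of two)

xs = np.linspace(-np.pi, np.pi, M)
probs = np.array([norm().pdf(x) for x in xs])
probs /= np.sum(probs)

func = lambda i: np.sin(xs[i]) ** 2  # bounded in [0, 1] by construction

n = 10  # number of estimation wires
N = 2 ** n

target_wires = range(m + 1)  # m wires for the distribution plus one ancilla
estimation_wires = range(m + 1, n + m + 1)

dev = qml.device("default.qubit", wires=(n + m + 1))

@qml.qnode(dev)
def circuit():
    qml.templates.QuantumMonteCarlo(
        probs, func, target_wires=target_wires, estimation_wires=estimation_wires
    )
    return qml.probs(estimation_wires)

phase_estimated = np.argmax(circuit()[: int(N / 2)]) / N
expectation_estimated = (1 - np.cos(np.pi * phase_estimated)) / 2
```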

@codecov

codecov bot commented Mar 9, 2021

Codecov Report

Merging #1130 (7f00f25) into master (c701627) will increase coverage by 0.01%.
The diff coverage is 100.00%.


@@            Coverage Diff             @@
##           master    #1130      +/-   ##
==========================================
+ Coverage   97.78%   97.79%   +0.01%     
==========================================
  Files         156      157       +1     
  Lines       11900    11964      +64     
==========================================
+ Hits        11636    11700      +64     
  Misses        264      264              
Impacted Files Coverage Δ
pennylane/templates/subroutines/__init__.py 100.00% <100.00%> (ø)
pennylane/templates/subroutines/qmc.py 100.00% <100.00%> (ø)

Δ = absolute <relative> (impact), ø = not affected, ? = missing data

Comment on lines 80 to 81
Consider a function defined on a set of integers :math:`X = [0, 1, \ldots, M - 1]` whose output
is bounded in the interval :math:`[0, 1]`, i.e., :math:`f: X \rightarrow [0, 1]`.
Contributor

The set of integers is not necessarily the set of values that the user wants to feed into the function, correct?

For example, I may be using a normal probability distribution with mean 10 and truncating it to [-5,5]. Then these integers X = [0, 1, ... M-1] will correspond to [-5,...5] and the user has to make sure to account for this in the function. E.g.

def func(x):
    return np.sin(x)

Must become something like:

def func(x):
    return np.sin(10 * x / (M - 1) - 5)

I'm wondering how we can make this clear and where to include it.

Contributor Author

Good point! Yes, that is true; hopefully the example in the usage details makes one approach to dealing with this clear.
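
For example, one way to handle the remapping (an illustrative sketch of the pattern used in the usage details; the grid values here are chosen only for demonstration) is to build the grid of x values up front and let the function index into it:

```python
import numpy as np

M = 16  # number of grid points (illustrative)
xs = np.linspace(-5, 5, M)  # integer indices 0, ..., M - 1 map onto the truncation interval

def func(i):
    # evaluate on the grid value for index i, keeping the output inside [0, 1]
    return np.sin(xs[i]) ** 2
```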

@trbromley trbromley changed the title [WIP] Add quantum Monte Carlo template Add quantum Monte Carlo template Mar 11, 2021
@trbromley trbromley self-assigned this Mar 11, 2021
@trbromley trbromley added the review-ready 👌 label (PRs which are ready for review by someone from the core team) Mar 11, 2021
@trbromley trbromley marked this pull request as ready for review March 11, 2021 13:50
return qml.probs(estimation_wires)

phase_estimated = np.argmax(circuit()[:int(N / 2)]) / N
expectation_estimated = (1 - np.cos(np.pi * phase_estimated)) / 2
Contributor

Why not sin^2 directly? Took me a while to remember my trig identities

Contributor

Or is this a coincidence in how the algorithm works to estimate a phase?

Contributor Author

Yes, this is a coincidence: we always need to do (1 - cos) / 2, independently of the random variable.
Could this be confusing? 🤔 We could add a comment, but then I'm not sure whether the comment would make it more confusing 😆
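
For reference, the post-processing formula is just the half-angle form of sin squared, i.e. (1 - cos x) / 2 = sin²(x / 2). A minimal numerical check, with the phase value picked arbitrarily for illustration:

```python
import numpy as np

theta = 0.37  # arbitrary phase value, for illustration only
lhs = (1 - np.cos(np.pi * theta)) / 2
rhs = np.sin(np.pi * theta / 2) ** 2  # half-angle identity: (1 - cos x) / 2 = sin^2(x / 2)
assert np.isclose(lhs, rhs)
```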

Contributor

It's not a big deal, I think it's fine to leave as is. But an alternative is just to use a non-trigonometric function, say x**3

Contributor Author

🤔 The motivation for the current choice is that you don't need to worry about renormalizing your function to [0, 1]

Contributor

Maybe it's a good thing that the example usage reminds users that they have to renormalize? But like I said, I'm nit picking and it's fine to leave as is


```pycon
>>> expectation_estimated
0.4327096457464369
```
Contributor

Amazing changelog! 🧙

unitary = np.zeros((dim, dim))

unitary[:, 0] = np.sqrt(probs)
unitary = np.linalg.qr(unitary)[0]
Contributor

Clever!

I'm a bit worried that this will be unnecessarily inefficient for large unitaries. Is there a more efficient way to populate the other rows and columns without having to perform a QR decomposition? 🤔

Contributor Author

That's a good question! I agree, it would be nice to make this as efficient as possible.

So the motivation for using QR is:

  • It's easy to make a linearly independent set of vectors: one of them is the square root of the probabilities and the rest come from the identity matrix (e.g., see lines 64-66; actually I wasn't doing this properly before but it still seemed to work 🤔).
  • We then want to apply the Gram-Schmidt process to orthonormalize the vectors.
  • I googled "numpy gram schmidt" and it told me to do QR 😆

So for me there are three questions:

  • Should we do Gram-Schmidt at all, or is there something more efficient?
  • Should we use QR to do Gram-Schmidt?
  • Could using QR ever result in the first column not being sqrt(p)?

Looking into it more, there is also scipy.linalg.orth, but the discussion here seemed inconclusive.
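
One QR-free possibility, sketched below under the assumption that we only need some real orthogonal matrix whose first column is sqrt(probs) (this is not necessarily the method linked in the later comment, nor what the PR ends up using): a single Householder reflection that maps the first basis vector onto sqrt(probs) is orthogonal by construction and costs O(M²) rather than a full QR.

```python
import numpy as np

def probs_to_unitary_householder(probs):
    """Sketch: build a real orthogonal matrix whose first column is sqrt(probs)
    using one Householder reflection instead of a QR decomposition."""
    v = np.sqrt(probs)
    dim = len(v)
    e0 = np.zeros(dim)
    e0[0] = 1.0

    w = v - e0
    norm_sq = np.dot(w, w)
    if np.isclose(norm_sq, 0.0):
        return np.eye(dim)  # sqrt(probs) already equals e0

    # H = I - 2 w w^T / (w^T w) is orthogonal and satisfies H @ e0 == v,
    # so its first column is exactly sqrt(probs)
    return np.eye(dim) - 2.0 * np.outer(w, w) / norm_sq
```

A quick sanity check would be np.allclose(U[:, 0], np.sqrt(probs)) together with np.allclose(U @ U.T, np.eye(len(probs))).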

Contributor

@DSGuala DSGuala Mar 12, 2021

Are we considering using MottonenStatePreparation here?

Contributor

I guess it is not as easy as I initially guessed to build a unitary starting from just one row 🤔

I found this method after a quick search, which seems to be faster than a QR decomposition. Maybe it's overkill, but it would be nice to benchmark both methods as we scale system size.

Contributor Author

Awesome, yes I will try this method!

pennylane/templates/subroutines/qmc.py (review thread resolved)
Contributor

@ixfoduap ixfoduap left a comment

Incredibly clean and professional PR. Fantastic job as usual Tom! 🧙

My only worry is whether there are parts of the construction and checks that are not optimized and may slow down the template as we scale the number of qubits. In particular, I wonder if there is a way to avoid the QR decomposition, but overall it's worth minimizing overheads.

"""
unitary = np.zeros((2 * M, 2 * M))

for i in range(M):
Contributor

This check may be slow if M is large. Is there a faster way? Maybe using min and max?

Contributor Author

The if not 0 <= f <= 1 check? 🤔

Yeah it's true it could fail right at the end. We could do a pre-check before the main for loop, at the cost of additional memory.
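
Such a pre-check could look roughly like this (an illustrative sketch, not necessarily the version added in 8ef3ec5); it evaluates the function once over all indices and bounds-checks with min/max, so the cached values can be reused afterwards:

```python
import numpy as np

def precheck_func(func, M):
    """Evaluate func on 0, ..., M - 1 up front and verify it maps into [0, 1]."""
    fs = np.array([func(i) for i in range(M)])
    if fs.min() < 0 or fs.max() > 1:
        raise ValueError("func must be bounded within the interval [0, 1]")
    return fs  # reuse these values in the main loop instead of re-evaluating func
```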

Contributor Author

I added this here: 8ef3ec5

pennylane/templates/subroutines/qmc.py (outdated review thread, resolved)

estimated by finding the probabilities of the :math:`n` estimation wires. This in turn allows
for the estimation of :math:`\mu`.

Visit `Rebentrost et al. (2018) <https://arxiv.org/abs/1805.00109>`__ for further details. In
Contributor

I would like to visit Patrick in Singapore 🌴

Contributor Author

Definitely! We should let him know that QMC made it to PL 🎉

pennylane/templates/subroutines/qmc.py (outdated review thread, resolved)
def test_fixed_examples(self, p):
"""Test if the correct unitary is returned for fixed input examples. A correct unitary has
its first column equal to the square root of the distribution and satisfies
U @ U.T = U.T @ U = I."""
Contributor

Is the QR decomposition guaranteed to give back a real-valued unitary? I suppose so

Contributor Author

I think so: since it's effectively doing Gram-Schmidt internally, if all the input vectors are real then the output should be real as well.
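
A small numerical check of that expectation (a sketch; note that NumPy's LAPACK-backed QR does not fix column signs, so the first column is only guaranteed to match sqrt(p) up to an overall sign):

```python
import numpy as np

p = np.array([0.1, 0.2, 0.3, 0.4])
A = np.zeros((4, 4))
A[:, 0] = np.sqrt(p)
A[np.arange(1, 4), np.arange(1, 4)] = 1.0  # pad with identity columns so A has full rank

Q = np.linalg.qr(A)[0]
assert np.isrealobj(Q)                            # real input gives a real Q
assert np.allclose(Q @ Q.T, np.eye(4))            # Q is orthogonal
assert np.allclose(np.abs(Q[:, 0]), np.sqrt(p))   # first column matches up to sign
```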

with pytest.raises(ValueError, match="func must be bounded within the interval"):
func_to_unitary(func, 8)

def test_example(self):
Contributor

great test!

)
assert all(o1.wires == o2.wires for o1, o2 in zip(queue_after_qpe, qpe_tape.queue))

def test_expected_value(self):
Contributor

Great! This is key

Contributor

Have you made checks for other examples? They shouldn't go here, but it's nice to see consistency in the correctness of the template

Contributor

@DSGuala Maybe you could play with different functions and probabilities to see if the correct answers are obtained?

Contributor

Agreed, working on this now.

Contributor Author

Also tried for cosine squared and a rescaled cosine, but agree that a few more informal checks (also with a different probability distribution) would be a nice validation!

exact = 0.432332358381693654

# Check that the error is monotonically decreasing
for i in range(len(estimates) - 1):
Contributor

Wow!

Contributor Author

It does decrease monotonically, but sometimes the number is the same. I guess that's a byproduct of the fact that adding a qubit only doubles the resolution each time, so sometimes you stay in the same bin.

tests/templates/test_subroutines_qmc.py (review thread resolved)
Contributor Author

@trbromley trbromley left a comment

Thanks @ixfoduap!


N = \mathcal{O}\left(\frac{1}{\epsilon^{2}}\right).

Hence, the quantum Monte Carlo algorithm has a quadratically improved time complexity with
:math:`N`.
Contributor Author

Thanks JM!

dim_p = len(probs)
num_target_wires = float(np.log2(2 * dim_p))

if not num_target_wires.is_integer():
Contributor Author

@ixfoduap I think this could cause problems if users do:

import numpy as np
from scipy.stats import norm

m = 5
M = 2 ** m - 3  # 2 ** 5 = 32, so M = 29 is not a power of two

xmax = np.pi  # bound to region [-pi, pi]
xs = np.linspace(-xmax, xmax, M)

probs = np.array([norm().pdf(x) for x in xs])
probs /= np.sum(probs)

func = lambda i: np.sin(xs[i]) ** 2

Their function is only defined on the set of xs, which has length 29. If we pad the probability distribution to 32, then the QMC function will try to evaluate func(31), which is not defined.

I like the idea, but I'd prefer not to do too many implicit things. For example, this is something a user could do themselves without too much overhead.
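
For example, a user-side workaround might look like the sketch below (illustrative only, reusing M, probs, and func from the snippet above): pad the distribution with zero-probability entries up to the next power of two and extend the function so it stays defined and bounded on the padded indices.

```python
M_padded = 2 ** int(np.ceil(np.log2(M)))  # next power of two (32 when M = 29)

probs_padded = np.zeros(M_padded)
probs_padded[:M] = probs  # padded entries carry zero probability

def func_padded(i):
    # padded indices never contribute to the expectation, but func must still
    # return a value in [0, 1] for them
    return func(i) if i < M else 0.0
```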

dim_p = len(probs)
num_target_wires = float(np.log2(2 * dim_p))

if not num_target_wires.is_integer():
Contributor Author

@DSGuala yes the x2 is for the ancilla.
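
As a small illustration of the check quoted above (the numbers are just an example): a distribution of length 16 needs log2(2 * 16) = 5 target wires, i.e. 4 wires to encode the distribution plus 1 ancilla.

```python
import numpy as np

dim_p = 16  # length of probs; must be a power of two
num_target_wires = float(np.log2(2 * dim_p))  # 5.0 = 4 distribution wires + 1 ancilla
assert num_target_wires.is_integer()
```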


@trbromley trbromley requested a review from ixfoduap March 17, 2021 11:35
Contributor

@ixfoduap ixfoduap left a comment

I'm glad the faster approach for constructing the prob_to_unitary is working well. Just left a few suggestions on changes to the docstrings

pennylane/templates/subroutines/qmc.py (7 outdated review threads, resolved)
trbromley and others added 3 commits March 17, 2021 13:19
Co-authored-by: ixfoduap <40441298+ixfoduap@users.noreply.github.com>
@trbromley
Contributor Author

Thanks @ixfoduap!

@trbromley trbromley merged commit 50df173 into master Mar 17, 2021
@trbromley trbromley deleted the add_qmc_1d branch March 17, 2021 17:51
Labels
review-ready 👌 PRs which are ready for review by someone from the core team.

3 participants