Add quantum Monte Carlo template #1130
Conversation
Codecov Report

```
@@            Coverage Diff             @@
##           master    #1130      +/-   ##
==========================================
+ Coverage   97.78%   97.79%   +0.01%
==========================================
  Files         156      157       +1
  Lines       11900    11964      +64
==========================================
+ Hits        11636    11700      +64
  Misses        264      264
==========================================
```

Continue to review full report at Codecov.
Consider a function defined on a set of integers :math:`X = [0, 1, \ldots, M - 1]` whose output
is bounded in the interval :math:`[0, 1]`, i.e., :math:`f: X \rightarrow [0, 1]`.
The set of integers is not necessarily the set of values that the user wants to pass to the function, correct?
For example, I may be using a normal probability distribution with mean 10 and truncating it to [-5, 5]. Then these integers X = [0, 1, ..., M - 1] will correspond to [-5, ..., 5], and the user has to make sure to account for this in the function. E.g.,

```python
def func(x):
    return sin(x)
```

must become something like:

```python
def func(x):
    return sin(10 * x / (M - 1) - 5)
```

I'm wondering how we can make this clear and where to include it.
Good point! Yes, that is true; hopefully the example in the usage details can make clear one approach to dealing with this.
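To illustrate one possible approach (a hypothetical sketch, not code from the template: `M`, the interval endpoints, and the output rescaling are all assumptions for illustration):

```python
import numpy as np

M = 8  # number of grid points (assumed value for illustration)

def rescale(i, lo=-5.0, hi=5.0):
    """Map an integer index i in [0, M - 1] onto the interval [lo, hi]."""
    return lo + (hi - lo) * i / (M - 1)

def func(i):
    # Evaluate the target function on the rescaled point, then shift and
    # scale its output from [-1, 1] into the required [0, 1] range.
    return (np.sin(rescale(i)) + 1) / 2
```

In this picture, rescaling both the input grid and the output range is the user's responsibility.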
```python
return qml.probs(estimation_wires)

phase_estimated = np.argmax(circuit()[:int(N / 2)]) / N
expectation_estimated = (1 - np.cos(np.pi * phase_estimated)) / 2
```
Why not sin^2 directly? It took me a while to remember my trig identities.
Or is this a coincidence in how the algorithm works to estimate a phase?
Yes, this is a coincidence - we always need to do (1 - cos) / 2 independent of the random variable.
Could this be confusing 🤔 We could add a comment, but then I'm not sure if the comment might make it more confusing 😆
It's not a big deal, I think it's fine to leave as is. But an alternative is just to use a non-trigonometric function, say x**3
🤔 The motivation for the current choice is that you don't need to worry about renormalizing your function to [0, 1]
Maybe it's a good thing that the example usage reminds users that they have to renormalize? But like I said, I'm nit picking and it's fine to leave as is
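For what it's worth, the cosine form is the half-angle identity in disguise; a standalone numerical check (not part of the template):

```python
import numpy as np

# The estimator uses (1 - cos(pi * theta)) / 2, which by the half-angle
# identity equals sin(pi * theta / 2) ** 2 for every theta.
for theta in np.linspace(0.0, 1.0, 11):
    lhs = (1 - np.cos(np.pi * theta)) / 2
    rhs = np.sin(np.pi * theta / 2) ** 2
    assert np.isclose(lhs, rhs)
```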
```pycon
>>> expectation_estimated
0.4327096457464369
```
Amazing changelog! 🧙
```python
unitary = np.zeros((dim, dim))

unitary[:, 0] = np.sqrt(probs)
unitary = np.linalg.qr(unitary)[0]
```
Clever!
I'm a bit worried that this will be unnecessarily inefficient for large unitaries. Is there a more efficient way to populate the other rows and columns without having to perform a QR decomposition? 🤔
That's a good question! Agreed, it would be nice to make this as efficient as possible.
The motivation for using QR is:
- It's easy to make a linearly independent set of vectors: one of them is the square-rooted probabilities and the rest come from the identity matrix (e.g., see lines 64-66; actually I wasn't doing this properly before, but it still seemed to work 🤔).
- We then want to apply the Gram-Schmidt process to orthonormalize the vectors.
- I googled "numpy gram schmidt" and it told me to do QR 😆
So for me there are three questions:
- Should we do Gram-Schmidt, or is there something more efficient?
- Should we use QR to do Gram-Schmidt?
- Could using QR ever result in the first column not being sqrt(p)?
Looking into it more, there is also scipy.linalg.orth. The discussion here seemed inconclusive.
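On the third question, a minimal standalone sketch of the QR idea (the values are illustrative, and the sign handling reflects an assumption about LAPACK's Householder convention possibly flipping the first column):

```python
import numpy as np

# Embed sqrt(probs) as the first column of an otherwise-zero matrix and
# let QR orthonormalize the remaining columns. Since sqrt(probs) is
# already a unit vector, the first column of Q equals it up to a sign.
probs = np.array([0.1, 0.2, 0.3, 0.4])  # example distribution
dim = len(probs)

a = np.zeros((dim, dim))
a[:, 0] = np.sqrt(probs)
unitary = np.linalg.qr(a)[0]

# Householder QR may negate the first column; flipping the sign of a
# single column preserves orthogonality.
if unitary[0, 0] < 0:
    unitary[:, 0] = -unitary[:, 0]

assert np.allclose(unitary[:, 0], np.sqrt(probs))
assert np.allclose(unitary @ unitary.T, np.eye(dim))
```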
Are we considering using MottonenStatePreparation here?
I guess it is not as easy as I initially guessed to build a unitary starting from just one row 🤔
I found this method after a quick search, which seems to be faster than a QR decomposition. Maybe it's overkill, but it would be nice to benchmark both methods as we scale system size.
Awesome, yes I will try this method!
Incredibly clean and professional PR. Fantastic job as usual, Tom! 🧙
My only worry is whether there are parts of the construction and checks that are not optimized and may slow down the template as we scale the number of qubits. In particular, I wonder if there is a way to avoid the QR decomposition, but overall it is worth minimizing overheads.

```python
"""
unitary = np.zeros((2 * M, 2 * M))

for i in range(M):
```
This check may be slow if `M` is large. Is there a faster way? Maybe using `min` and `max`?
The `if not 0 <= f <= 1` check? 🤔
Yeah, it's true it could fail right at the end. We could do a pre-check before the main `for` loop, at the cost of additional memory.
I added this here: 8ef3ec5
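For reference, a hedged sketch of what such a pre-check could look like (`M` and `func` are illustrative placeholders, not the template's actual code):

```python
import numpy as np

M = 8  # number of grid points (assumed for illustration)

def func(i):
    return np.sin(i / (M - 1)) ** 2  # example function, bounded in [0, 1]

# Evaluate all M points once and compare only the extremes, rather than
# checking 0 <= f <= 1 inside the main construction loop. This trades
# O(M) extra memory for failing fast before any work is done.
values = np.array([func(i) for i in range(M)])
if values.min() < 0 or values.max() > 1:
    raise ValueError("func must be bounded within the interval [0, 1]")
```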
estimated by finding the probabilities of the :math:`n` estimation wires. This in turn allows
for the estimation of :math:`\mu`.

Visit `Rebentrost et al. (2018) <https://arxiv.org/abs/1805.00109>`__ for further details. In
I would like to visit Patrick in Singapore 🌴
Definitely! We should let him know that QMC made it to PL 🎉
```python
def test_fixed_examples(self, p):
    """Test if the correct unitary is returned for fixed input examples. A correct unitary has
    its first column equal to the square root of the distribution and satisfies
    U @ U.T = U.T @ U = I."""
```
Is the QR decomposition guaranteed to give back a real-valued unitary? I suppose so
I think so, since it's effectively doing Gram-Schmidt internally, if all the input vectors are real then I think the output should be real.
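A quick standalone check supports this: QR of a real matrix yields a real orthogonal Q (a sketch with arbitrary example values, not one of the PR's tests):

```python
import numpy as np

# For a real input matrix, numpy's QR is computed via real Householder
# reflections, so the returned Q should be real as well.
rng = np.random.default_rng(42)
a = rng.normal(size=(4, 4))
q = np.linalg.qr(a)[0]

assert not np.iscomplexobj(q)
assert np.allclose(q @ q.T, np.eye(4))
```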
```python
with pytest.raises(ValueError, match="func must be bounded within the interval"):
    func_to_unitary(func, 8)

def test_example(self):
```
Great test!
```python
)
assert all(o1.wires == o2.wires for o1, o2 in zip(queue_after_qpe, qpe_tape.queue))

def test_expected_value(self):
```
Great! This is key
Have you made checks for other examples? They shouldn't go here, but it's nice to see consistency in the correctness of the template
@DSGuala Maybe you could play with different functions and probabilities to see if the correct answers are obtained?
Agreed, working on this now.
I also tried cosine squared and a rescaled cosine, but I agree that a few more informal checks (also with a different probability distribution) would be a nice validation!
```python
exact = 0.432332358381693654

# Check that the error is monotonically decreasing
for i in range(len(estimates) - 1):
```
Wow!
It does decrease monotonically, but sometimes the number is the same. I guess that's a byproduct of each added qubit increasing the accuracy only by a factor of two, so sometimes you stay in the same bin.
Co-authored-by: ixfoduap <40441298+ixfoduap@users.noreply.github.com>
Thanks @ixfoduap!
N = \mathcal{O}\left(\frac{1}{\epsilon^{2}}\right).

Hence, the quantum Monte Carlo algorithm has a quadratically improved time complexity with
:math:`N`.
Thanks JM!
```python
dim_p = len(probs)
num_target_wires = float(np.log2(2 * dim_p))

if not num_target_wires.is_integer():
```
@ixfoduap I think this could cause problems if users do:

```python
m = 5
M = 2 ** m - 3  # 2 ** 5 = 32, so M = 29
xmax = np.pi  # bound to region [-pi, pi]
xs = np.linspace(-xmax, xmax, M)

probs = np.array([norm().pdf(x) for x in xs])
probs /= np.sum(probs)

func = lambda i: np.sin(xs[i]) ** 2
```

Their function is only defined on the set `xs`, which has length 29. If we pad the probability distribution to 32, then the QMC function will try to evaluate `func(31)`, which is not defined.
I like the idea, but prefer not to do too many implicit things. For example, this is something a user could do without too much overhead.
@DSGuala yes the x2 is for the ancilla.
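A standalone sketch of the check under discussion (the distribution is an illustrative example; the error message is paraphrased):

```python
import numpy as np

# len(probs) must be a power of two, and the factor of two accounts for
# the extra ancilla wire added on top of the target register.
probs = np.ones(8) / 8  # example distribution of power-of-two length

dim_p = len(probs)
num_target_wires = float(np.log2(2 * dim_p))
if not num_target_wires.is_integer():
    raise ValueError("The probability distribution must have a power-of-two length")
```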
I'm glad the faster approach for constructing the prob_to_unitary is working well. I just left a few suggestions on changes to the docstrings.
Description of the Change:
Adds a template for performing quantum Monte Carlo estimation.
Possible Drawbacks: