Circuit cutting: add interface integration tests #2231
Conversation
[sc-14724]
Codecov Report
@@           Coverage Diff           @@
##           master    #2231   +/-  ##
=======================================
  Coverage   99.26%   99.26%
=======================================
  Files         231      231
  Lines       18349    18349
=======================================
  Hits        18215    18215
  Misses        134      134

Continue to review full report at Codecov.
Thanks for the interface tests! Gradients are looking promising 😁
res = cut_circuit(x)
res_expected = circuit(x)
assert np.isclose(res.detach().numpy(), res_expected.detach().numpy())
Why is the detach method being used here? Why is it needed for comparisons?
If you do assert np.isclose(res, res_expected), then PyTorch gives the error:
RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead.
Could possibly also use torch.isclose.
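To make the detach requirement concrete, here is a minimal sketch of both fixes. The x * 2 computations are stand-ins for cut_circuit(x) and circuit(x), which need a full PennyLane setup; everything else uses only torch and NumPy.

```python
import numpy as np
import torch

# A grad-tracking tensor, mirroring the test's input
x = torch.tensor(0.531, requires_grad=True)
res = x * 2           # stand-in for cut_circuit(x)
res_expected = x * 2  # stand-in for circuit(x)

# Calling .numpy() on a tensor that requires grad raises RuntimeError
try:
    res.numpy()
except RuntimeError:
    pass  # "Can't call numpy() on Tensor that requires grad..."

# Fix 1: detach from the autograd graph before converting to NumPy
assert np.isclose(res.detach().numpy(), res_expected.detach().numpy())

# Fix 2: stay in PyTorch and skip the NumPy conversion entirely
assert torch.isclose(res, res_expected)
```

The torch.isclose route has the small advantage of never leaving the PyTorch interface, so it works regardless of whether the tensors track gradients.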
return qml.expval(qml.PauliZ(wires=[0]))

x = torch.tensor(0.531, requires_grad=True)
cut_circuit = qcut.cut_circuit(circuit, use_opt_einsum=use_opt_einsum)
Is qcut.cut_circuit preferred over qml.transforms.cut_circuit?
(flyby comment: why not make it top-level, @qml.cut_circuit?)
is qcut.cut_circuit preferred over qml.transforms.cut_circuit?

For sure in user-facing documentation we don't want to go with qcut and should instead go through qml.

(flyby comment: why not make it top-level, @qml.cut_circuit?)

Yes, good idea, could do! What are the criteria for qml.func() vs qml.transforms.func()? It looks like it's grown a bit organically so far, with both choices used.
What are the criteria for qml.func() vs qml.transforms.func()? It looks like it's grown a bit organically so far, with both choices used.

There are no fixed criteria at the moment; the choice depends on UI considerations and on how important a particular transform is!
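The top-level @qml.cut_circuit suggestion amounts to re-exporting the same function object at the package root. A minimal sketch of that aliasing pattern, using SimpleNamespace objects as stand-ins for the real package and submodule (the cut_circuit body here is a placeholder, not the actual transform):

```python
from types import SimpleNamespace

def cut_circuit(circuit, use_opt_einsum=False):
    # Placeholder: the real transform rewrites the QNode; here we
    # just return the circuit unchanged to illustrate the aliasing.
    return circuit

# "transforms" submodule exposing the function
transforms = SimpleNamespace(cut_circuit=cut_circuit)

# Package root re-exports it, so users can write qml.cut_circuit
# as well as qml.transforms.cut_circuit
qml = SimpleNamespace(cut_circuit=cut_circuit, transforms=transforms)

# Both names resolve to the identical function object
assert qml.cut_circuit is qml.transforms.cut_circuit
```

In a real package this is usually done with a from .transforms import cut_circuit line in the package's __init__.py, which keeps a single implementation while offering both spellings.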
Thanks @anthayes92 @josh146!
Context:
Adds tests for the cut_circuit transform that ensure differentiability with all of the interfaces.