Add TTN template #2043
Conversation
Codecov Report

```
@@           Coverage Diff           @@
##           master    #2043   +/-   ##
=======================================
  Coverage   99.17%   99.18%
=======================================
  Files         225      226      +1
  Lines       17291    17352     +61
=======================================
+ Hits        17149    17210     +61
  Misses        142      142
```

Continue to review the full report at Codecov.
|
The result is similar to the architecture in `arXiv:1803.11537 <https://arxiv.org/abs/1803.11537>`_.

The argument ``block`` is a user-defined quantum circuit. ``block`` should have two arguments: ``weights`` and ``wires``. For clarity, it is recommended to use a one-dimensional list or array for the block weights.
We have used fixed signatures a few times before in PennyLane, and always regretted it :) I don't know a better option, but one could at least allow `weights` (or better, `parameters`) to be a list of inputs, which gets unpacked as `block(*parameters, wires=...)`. That's how `templates.broadcast` does it... But it would be great to have a better solution one day!
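The unpacking convention mentioned above can be sketched in plain Python (all names here are hypothetical, for illustration only, not PennyLane's actual internals):

```python
# Hypothetical sketch: the template stores a list of inputs per block and
# calls the user's block as block(*parameters, wires=...), so the block can
# have any number of positional parameter arguments.
def apply_block(block, parameters, wires):
    # each element of `parameters` becomes one positional argument
    return block(*parameters, wires=wires)

def my_block(par1, par2, wires):
    return (par1, par2, tuple(wires))

result = apply_block(my_block, [[1.0, 1.0], 2.0], wires=[0, 1])
print(result)  # ([1.0, 1.0], 2.0, (0, 1))
```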
Thanks Maria! I made this change. I'm happy that it makes `block` more flexible.

I'm now getting some warnings about ragged arrays. I think somewhere deeper in PennyLane the parameters are converted to NumPy arrays 🤔
That is very strange, because PL should only ever produce lists of tensors/arrays internally, or flattened versions. I think it would be good to trace where this error arises, just in case it is something trivial. Ideally one should never create a ragged array; NumPy is phasing them out...
I found that this happens with `qml.broadcast()` as well:

```python
import pennylane as qml

def mytemplate(par1, par2, wires):
    qml.RY(par1[0], wires=wires[0])
    qml.RY(par1[1], wires=wires[0])
    qml.RY(par2, wires=wires[1])

dev = qml.device('default.qubit', wires=4)

@qml.qnode(dev)
def circuit(pars):
    qml.broadcast(unitary=mytemplate, pattern="chain", wires=[0, 1, 2, 3], parameters=pars)
    return qml.expval(qml.PauliZ(0))

print(qml.grad(circuit)([[[1.0, 1.0], 2.0], [[1.0, 1.0], 2.0], [[1.0, 1.0], 2.0]]))
```

Error: `array is not broadcastable to correct shape`
I traced the ragged array warning to the `construct()` method of `qnode.py`:

```python
def construct(self, args, kwargs):
    """Call the quantum function with a tape context, ensuring the operations get queued."""
    if self.interface == "autograd":
        # HOTFIX: to maintain backwards compatibility with existing PennyLane code and demos,
        # here we treat all inputs that do not explicitly specify `requires_grad=False`
        # as trainable. This should be removed at some point, forcing users
        # to specify `requires_grad=True` for trainable parameters.
        args = [
            qml.numpy.array(a, requires_grad=True) if not hasattr(a, "requires_grad") else a
            for a in args
        ]
        ...
```

Here, each element `a` of `args` is converted to a `qml.numpy.array`. In this case `args` only has one element, `a`, which is the ragged array supplied by the user.

I think it may be okay to leave things as they are and add a warning in the documentation about using ragged arrays to define the template parameters 🤔
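For context, the fragility of ragged inputs can be reproduced with NumPy alone (a sketch; the exact behavior depends on the NumPy version, older releases emit a `VisibleDeprecationWarning` while newer ones raise a `ValueError` outright):

```python
import warnings
import numpy as np

# Building an array from an inhomogeneous ("ragged") nested list.
ragged = [[1.0, 1.0], 2.0]  # elements have different shapes

problematic = False
with warnings.catch_warnings():
    # Promote warnings to errors so the old (warning) and new (error)
    # NumPy behaviors land in the same except branch.
    warnings.simplefilter("error")
    try:
        np.array(ragged)
    except Exception as exc:
        problematic = True
        print(type(exc).__name__)

print(problematic)
```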
@DSGuala luckily we are planning to remove that HOTFIX if statement within days/weeks :)
@antalszava tagging you here for visibility
```python
self.template_weights = template_weights
```

```python
super().__init__(template_weights, wires=wires, do_queue=do_queue, id=id)
```
The operator refactor uses a new structure, where hyperparameters like `block` are stored in a dictionary called `self.hyperparameters`, so that the set `op.name, op.wires, op.parameters, op.hyperparameters` fully describes an op (in the sense that if these are equal, two ops can be considered equal).

Don't worry about it for now, we'll adapt the template (unless you want to give it a go)! But things will be a bit easier then.
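The structure described above could be sketched roughly as follows (a hypothetical plain-Python illustration, not PennyLane's actual `Operator` class):

```python
# Hypothetical sketch: trainable parameters live in op.parameters, while
# non-trainable settings such as `block` live in op.hyperparameters, and the
# four fields together determine equality.
class Op:
    def __init__(self, name, wires, parameters, hyperparameters):
        self.name = name
        self.wires = tuple(wires)
        self.parameters = list(parameters)
        self.hyperparameters = dict(hyperparameters)

    def __eq__(self, other):
        return (
            self.name == other.name
            and self.wires == other.wires
            and self.parameters == other.parameters
            and self.hyperparameters == other.hyperparameters
        )

a = Op("TTN", [0, 1], [[0.1, 0.2]], {"n_block_wires": 2})
b = Op("TTN", [0, 1], [[0.1, 0.2]], {"n_block_wires": 2})
print(a == b)  # True
```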
```python
super().__init__(template_weights, wires=wires, do_queue=do_queue, id=id)
```

```python
def expand(self):
```
In the same refactor we start defining decompositions instead of expand, but I'll take care of this!
```python
return qml.expval(qml.PauliZ(wires=wires[-1]))
```

```python
manual_result = circuit()
assert np.isclose(template_result, manual_result)
```
Question: do you guarantee that the template is trainable with respect to the block params? I think it should be, but for that you'd have to add differentiability tests...
I added a differentiability test. This seems to work fine except for ragged arrays. I suppose this is like the warnings in the other comment and related to how PennyLane handles the parameters 🤔
Only minor comments for now, looks good. We will redesign it a bit in the op refactor!
```python
qml.TTN(range(n_wires), n_block_wires, block, n_params_block, template_weights)
return qml.expval(qml.PauliZ(wires=n_wires - 1))
```

```python
>>> print(qml.draw(circuit, expansion_strategy='device')(template_weights))
```
@DSGuala if you set `grad_method=None` below, this should register that this operation has no explicit gradient recipe, and it should automatically be decomposed. This should mean that `expansion_strategy='device'` is not needed here, I think.
I made this change but still seem to require `expansion_strategy='device'` 🤔
Oh 🤔 This is probably because the QNode is in backprop mode; I think removing the `expansion_strategy` would only work in parameter-shift mode.
Co-authored-by: Maria Schuld <mariaschuld@gmail.com>
Looking really good!

@mariaschuld has made great suggestions, especially regarding differentiability tests. Please address those.

Other than that, my only major suggestion is to update the image to better showcase the tree structure of the template.
Co-authored-by: ixfoduap <40441298+ixfoduap@users.noreply.github.com>
Add differentiability test
Looks good from my side, but let's wait for an approval from @mariaschuld before merging
Remove ragged tests, warn user
I didn't check every little detail, but looks good overall!
Context:
Adding PennyLane functionality to facilitate the investigation of tensor network based quantum circuits.
Description of the Change:
Adds `qml.templates.tensornetworks.ttn`. This is a template that creates quantum circuits based on the shape of tree tensor networks.
The tree tensor network template accepts user defined circuits for the individual tensor blocks in the circuit.
Benefits:
Allows users to easily build tree tensor network inspired quantum circuits.
Possible Drawbacks:
N/A
Related GitHub Issues:
N/A