
Add TTN template #2043

Merged: 19 commits into master, Jan 4, 2022
Conversation

DSGuala (Contributor) commented Dec 17, 2021:

Context:
Adding PennyLane functionality to facilitate the investigation of tensor-network-based quantum circuits.

Description of the Change:
Adds qml.templates.tensornetworks.ttn.
This is a template that creates quantum circuits based on the shape of tree tensor networks.
The tree tensor network template accepts user-defined circuits for the individual tensor blocks in the circuit.

Benefits:
Allows users to easily build tree tensor network inspired quantum circuits.

Possible Drawbacks:
N/A

Related GitHub Issues:
N/A
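As a rough illustration of the tree shape the template follows, here is a plain-Python sketch (a hypothetical function, not part of PennyLane; the wiring convention is an assumption for illustration only) that lists which pair of wires each two-wire block could act on, layer by layer:

```python
def ttn_block_wires(n_wires):
    """Hypothetical sketch: per-layer wire pairs for a binary tree of
    two-wire blocks. Assumes n_wires is a power of two."""
    assert n_wires >= 2 and n_wires & (n_wires - 1) == 0, "n_wires must be a power of two"
    layers = []
    step = 1
    while step < n_wires:
        # each block couples the top wire of two adjacent subtrees
        layers.append(
            [(i + step - 1, i + 2 * step - 1) for i in range(0, n_wires, 2 * step)]
        )
        step *= 2
    return layers

# for 4 wires: two leaf blocks, then one root block
print(ttn_block_wires(4))  # [[(0, 1), (2, 3)], [(1, 3)]]
```

Each layer halves the number of blocks until a single root block remains, mirroring the contraction pattern of a tree tensor network.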

@DSGuala DSGuala changed the title [WIP] Add TTN template Add TTN template Dec 21, 2021
@DSGuala DSGuala marked this pull request as ready for review December 21, 2021 15:42
codecov bot commented Dec 21, 2021:

Codecov Report

Merging #2043 (3ed79f2) into master (a26d136) will increase coverage by 0.00%.
The diff coverage is 100.00%.


@@           Coverage Diff           @@
##           master    #2043   +/-   ##
=======================================
  Coverage   99.17%   99.18%           
=======================================
  Files         225      226    +1     
  Lines       17291    17352   +61     
=======================================
+ Hits        17149    17210   +61     
  Misses        142      142           
Impacted Files Coverage Δ
pennylane/templates/tensornetworks/__init__.py 100.00% <100.00%> (ø)
pennylane/templates/tensornetworks/ttn.py 100.00% <100.00%> (ø)

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update a26d136...3ed79f2.

The result is similar to the architecture in `arXiv:1803.11537 <https://arxiv.org/abs/1803.11537>`_.

The argument ``block`` is a user-defined quantum circuit. ``block`` should have two arguments: ``weights`` and ``wires``.
For clarity, it is recommended to use a one-dimensional list or array for the block weights.
mariaschuld (Contributor) commented:

We have used fixed signatures a few times before in PennyLane, and always regretted it :)

I don't know a better option, but one could at least allow weights (or better parameters) to be a list of inputs, which gets unpacked as block(*parameters, wires=...). That's how templates.broadcast does it...

But it would be great to have a better solution one day!
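For illustration, the unpacking convention described above can be sketched in plain Python (`apply_block` is a hypothetical helper, not the actual `templates.broadcast` code):

```python
def apply_block(block, parameters, wires):
    """Unpack a list of parameters as positional arguments, so a block
    may accept any number of parameter arguments."""
    if isinstance(parameters, (list, tuple)):
        block(*parameters, wires=wires)
    else:
        block(parameters, wires=wires)

calls = []  # record the calls instead of applying quantum gates

def my_block(par1, par2, wires):
    calls.append((par1, par2, tuple(wires)))

apply_block(my_block, [[1.0, 1.0], 2.0], wires=[0, 1])
print(calls)  # [([1.0, 1.0], 2.0, (0, 1))]
```

One downside of this convention is the ambiguity it creates: a single parameter that happens to be a list would also be unpacked, which is part of why a better solution remains desirable.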

DSGuala (author) replied:

Thanks Maria! I made this change. I'm happy that it makes block more flexible.
I'm now getting some warnings about ragged arrays. I think somewhere deeper in PennyLane the parameters are converted to numpy arrays 🤔

mariaschuld (Contributor) commented Dec 24, 2021:

That is very strange, because PL should only ever produce lists of tensors/arrays internally, or flattened versions. I think it would be good to trace where this error arises, just in case it is something trivial. Ideally one should never create a ragged array, they will be increasingly discontinued...

DSGuala (author) replied:

I found that this happens with qml.broadcast() as well:

import pennylane as qml

def mytemplate(par1, par2, wires):
    qml.RY(par1[0], wires=wires[0])
    qml.RY(par1[1], wires=wires[0])
    qml.RY(par2, wires=wires[1])

dev = qml.device('default.qubit', wires=4)

@qml.qnode(dev)
def circuit(pars):
    qml.broadcast(unitary=mytemplate, pattern="chain", wires=[0, 1, 2, 3], parameters=pars)
    return qml.expval(qml.PauliZ(0))

print(qml.grad(circuit)([[[1.0, 1.0], 2.0], [[1.0, 1.0], 2.0], [[1.0, 1.0], 2.0]]))

This raises: array is not broadcastable to correct shape

DSGuala (author) replied:

I traced the ragged array warning to the construct() method in qnode.py:

def construct(self, args, kwargs):
    """Call the quantum function with a tape context, ensuring the operations get queued."""

    if self.interface == "autograd":
        # HOTFIX: to maintain backwards compatibility existing PennyLane code and demos, here we treat
        # all inputs that do not explicitly specify `requires_grad=False`
        # as trainable. This should be removed at some point, forcing users
        # to specify `requires_grad=True` for trainable parameters.
        args = [
            qml.numpy.array(a, requires_grad=True) if not hasattr(a, "requires_grad") else a
            for a in args
        ]
...

Here, each element of args that lacks requires_grad is converted to a qml.numpy.array. In this case args has a single element, a, which is the ragged structure supplied by the user.

I think it may be okay to leave things like this and put a warning in the documentation about the use of ragged arrays to define the template parameters 🤔
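The underlying conversion issue can be reproduced with NumPy alone (a minimal sketch; this is not the qnode.py code path itself):

```python
import numpy as np

# a nested, non-rectangular parameter structure like the one above
ragged = [[[1.0, 1.0], 2.0]] * 3

try:
    arr = np.array(ragged)  # NumPy >= 1.24 raises; older versions only warn
except ValueError:
    arr = np.array(ragged, dtype=object)  # an explicit object dtype is required

print(arr.shape)  # (3, 2)
```

NumPy stops descending where the nesting becomes irregular, so the result is a (3, 2) array of Python objects rather than a rectangular float array, which is what breaks downstream gradient machinery.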

Member commented:

@DSGuala luckily we are planning to remove that HOTFIX if statement within days/weeks :)

Member commented:

@antalszava tagging you here for visibility


self.template_weights = template_weights

super().__init__(template_weights, wires=wires, do_queue=do_queue, id=id)
Contributor commented:

The operator refactor uses a new structure, where hyperparameters like block are stored in a dictionary called self.hyperparameters, so that the set op.name, op.wires, op.parameters, op.hyperparameters fully describes an op (in the sense that if these are all equal, two ops can be considered equal).

Don't worry about it for now, we'll adapt the template (unless you want to give it a go)! But things will be a bit easier then.
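The idea that those four attributes fully determine an operator can be sketched with a toy class (illustrative only; this is not the real PennyLane Operator implementation):

```python
class ToyOp:
    """Toy operator: two instances are equal iff name, wires,
    parameters, and hyperparameters all match."""

    def __init__(self, name, wires, parameters, hyperparameters=None):
        self.name = name
        self.wires = list(wires)
        self.parameters = list(parameters)
        self.hyperparameters = hyperparameters or {}

    def __eq__(self, other):
        return (
            self.name == other.name
            and self.wires == other.wires
            and self.parameters == other.parameters
            and self.hyperparameters == other.hyperparameters
        )

def block(weights, wires):  # a stand-in callable stored as a hyperparameter
    pass

a = ToyOp("TTN", [0, 1, 2, 3], [[0.1, 0.2]], {"block": block})
b = ToyOp("TTN", [0, 1, 2, 3], [[0.1, 0.2]], {"block": block})
print(a == b)  # True
```

Storing the block in a hyperparameters dictionary keeps non-trainable callables out of op.parameters, which then holds only trainable tensors.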


super().__init__(template_weights, wires=wires, do_queue=do_queue, id=id)

def expand(self):
Contributor commented:

In the same refactor we start defining decompositions instead of expand, but I'll take care of this!

return qml.expval(qml.PauliZ(wires=wires[-1]))

manual_result = circuit()
assert np.isclose(template_result, manual_result)
Contributor commented:

Question: do you guarantee that the template is trainable wrt the block params? I think it should be, but for that you'd have to add diffability tests...

DSGuala (author) replied:

I added a differentiability test. This seems to work fine except for ragged arrays. I suppose this is like the warnings in the other comment and related to how PennyLane handles the parameters 🤔
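A framework-agnostic way to sketch such a differentiability check is to compare an analytic gradient against central finite differences (the toy cost function here merely stands in for the TTN circuit; this is not the test added in the PR):

```python
import numpy as np

def finite_diff_grad(f, x, eps=1e-6):
    """Central-difference gradient of a scalar function f at point x."""
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx.flat[i] = eps
        grad.flat[i] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return grad

# toy differentiable "circuit" output
cost = lambda w: np.cos(w[0]) * np.sin(w[1])
numeric = finite_diff_grad(cost, [0.3, 0.7])
analytic = np.array([-np.sin(0.3) * np.sin(0.7), np.cos(0.3) * np.cos(0.7)])
print(np.allclose(numeric, analytic, atol=1e-5))  # True
```

Note that the np.asarray(x, dtype=float) step is exactly where a ragged parameter structure would break, consistent with the failures observed in the earlier comments.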

mariaschuld (Contributor) left a review:

Only minor comments for now, looks good. We will redesign it a bit in the op refactor!

qml.TTN(range(n_wires), n_block_wires, block, n_params_block, template_weights)
return qml.expval(qml.PauliZ(wires=n_wires - 1))

>>> print(qml.draw(circuit,expansion_strategy='device')(template_weights))
Member commented:

@DSGuala if you set grad_method=None below, this should register that this operation has no explicit gradient recipe, and should automatically be decomposed. This should mean that expansion_strategy='device' is not needed here, I think.

DSGuala (author) replied:

I made this change but still seem to require expansion_strategy='device' 🤔

Member replied:

Oh! 🤔 This is probably because the QNode is in backprop mode; I think removing the expansion_strategy would only work in parameter-shift mode.

DSGuala and others added 2 commits December 22, 2021 08:49
Co-authored-by: Maria Schuld <mariaschuld@gmail.com>
Co-authored-by: Maria Schuld <mariaschuld@gmail.com>
ixfoduap (Contributor) left a review:

Looking really good!

@mariaschuld has made great suggestions, especially regarding differentiability tests. Please address those.

Other than that, my only major suggestion is to update the image to better showcase the tree structure of the template

Files with review threads (resolved):
pennylane/templates/tensornetworks/ttn.py
doc/introduction/templates.rst
DSGuala and others added 5 commits December 22, 2021 11:25
Co-authored-by: ixfoduap <40441298+ixfoduap@users.noreply.github.com>
Co-authored-by: ixfoduap <40441298+ixfoduap@users.noreply.github.com>
Add differentiability test
ixfoduap (Contributor) left a review:

Looks good from my side, but let's wait for an approval from @mariaschuld before merging

mariaschuld (Contributor) left a review:

I didn't check every little detail, but looks good overall!

@DSGuala DSGuala merged commit b449980 into master Jan 4, 2022
@DSGuala DSGuala deleted the add_TTN branch January 4, 2022 14:51