
Lie gradient flow optimizer #1911

Merged: 80 commits merged into PennyLaneAI:master on Dec 3, 2021
Conversation

@therooler (Collaborator) commented Nov 17, 2021

Context:
Implementation of Lie gradient flow on quantum circuits.

Description of the Change:
A new optimizer is added.

The Lie gradient optimizer grows the circuit by appending unitaries that correspond to the Lie gradient.
This is currently done via a @qml.qfunc_transform, and it works reasonably well. I wrote a couple of tests to make sure that calling opt.step() works and that the result matches what you would get by doing everything in NumPy.
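
As a rough illustration only (not the PR's actual code), here is a minimal sketch of a qml.qfunc_transform that replays an existing circuit and appends an extra unitary, which is the general circuit-growing pattern described above; the transform name append_evolution and its generator/t arguments are hypothetical:

import pennylane as qml

@qml.qfunc_transform
def append_evolution(tape, generator, t):
    # replay the original operations of the circuit
    for op in tape.operations:
        qml.apply(op)
    # append a Trotterized approximation of exp(-i t H) for the supplied generator
    qml.templates.ApproxTimeEvolution(generator, t, 1)
    # replay the original measurements
    for m in tape.measurements:
        qml.apply(m)

Such a transform would be applied as new_qfunc = append_evolution(generator, t)(qfunc), with the generator built from the Lie/Riemannian gradient in the optimizer's case.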

One important issue that I'm having is calculating the cost after the circuit has been transformed.

For some reason, line 236 (now commented out):

cost_fn = qml.map(circuit, self.observables, device=self.circuit.device, measure='expval')([],[])

throws the following error:

 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
../../pennylane/optimize/lie_gradient.py:258: in step
    self.step_and_cost(
../../pennylane/optimize/lie_gradient.py:236: in step_and_cost
    cost_fn = qml.map(circuit, self.observables, device=self.circuit.device, measure='expval')([],[])
../../pennylane/collections/qnode_collection.py:277: in __call__
    results = self.evaluate(args, kwargs)
../../pennylane/collections/qnode_collection.py:233: in evaluate
    results.append(q(*args, **kwargs))
../../pennylane/qnode.py:558: in __call__
    self.construct(args, kwargs)
../../pennylane/qnode.py:494: in construct
    self._qfunc_output = self.func(*args, **kwargs)
../../pennylane/collections/map.py:137: in circuit
    return MEASURE_MAP[_m](_obs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

op = tensor([], dtype=float64, requires_grad=True)

    def expval(op):
        r"""Expectation value of the supplied observable.
    
        **Example:**
    
        .. code-block:: python3
    
            dev = qml.device("default.qubit", wires=2)
    
            @qml.qnode(dev)
            def circuit(x):
                qml.RX(x, wires=0)
                qml.Hadamard(wires=1)
                qml.CNOT(wires=[0, 1])
                return qml.expval(qml.PauliY(0))
    
        Executing this QNode:
    
        >>> circuit(0.5)
        -0.4794255386042029
    
        Args:
            op (Observable): a quantum observable object
    
        Raises:
            QuantumFunctionError: `op` is not an instance of :class:`~.Observable`
        """
        if not isinstance(op, (Observable, qml.Hamiltonian)):
            raise qml.QuantumFunctionError(
>               "{} is not an observable: cannot be used with expval".format(op.name)
            )
E           AttributeError: 'tensor' object has no attribute 'name'

../../pennylane/measure.py:255: AttributeError

It has something to do with the fact that products of Paulis are considered tensors and not Observables.

I'm sure there is a better way of doing what I'm doing now, so I look forward to your thoughts, @josh146.
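
For readers unfamiliar with PennyLane's collections API, here is a hedged, self-contained sketch of the qml.map + qml.dot pattern that the commented-out line appears to be attempting, with a toy ansatz and toy observables (all names below are illustrative, not the PR's code); the key requirement is that each entry passed to qml.map is a PennyLane Observable instance:

import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

# toy observables: Observable instances (including tensor products of Paulis)
obs = [qml.PauliZ(0), qml.PauliZ(0) @ qml.PauliZ(1)]
coeffs = np.array([1.0, 0.5])

# qml.map expects an ansatz with signature template(params, wires, **kwargs)
def ansatz(params, wires, **kwargs):
    qml.RX(params[0], wires=wires[0])
    qml.CNOT(wires=[wires[0], wires[1]])

# one QNode per observable, combined into a single cost function
qnodes = qml.map(ansatz, obs, dev, measure="expval")
cost_fn = qml.dot(coeffs, qnodes)
print(cost_fn([0.1]))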

@codecov (codecov bot) commented Nov 17, 2021

Codecov Report

Merging #1911 (7a57705) into master (fad83dc) will increase coverage by 0.00%.
The diff coverage is 100.00%.


@@           Coverage Diff           @@
##           master    #1911   +/-   ##
=======================================
  Coverage   98.80%   98.81%           
=======================================
  Files         225      226    +1     
  Lines       17144    17233   +89     
=======================================
+ Hits        16939    17028   +89     
  Misses        205      205           
Impacted Files Coverage Δ
pennylane/__init__.py 100.00% <100.00%> (ø)
pennylane/optimize/__init__.py 100.00% <100.00%> (ø)
pennylane/optimize/lie_algebra.py 100.00% <100.00%> (ø)

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@josh146 (Member) left a comment

Nice work @therooler! I've gone through and left some first-pass comments and questions, mostly just for my own understanding :)

(Several inline review comments on pennylane/optimize/lie_gradient.py; outdated, resolved)
Comment on lines 341 to 346
out_plus = qml.execute(
circuit_plus, self.circuit.device, gradient_fn=qml.gradients.param_shift
)
out_min = qml.execute(
circuit_min, self.circuit.device, gradient_fn=qml.gradients.param_shift
)
@josh146 (Member):

@therooler you could optimize this further by executing all circuits at once,

out = qml.execute(circuits, ...)
for out_plus, out_min in out:
    ...

@therooler (Collaborator, author):

Hmm, this

out = qml.execute(circuits, self.circuit.device, gradient_fn=None)
for out_plus, out_min in out:
    # depending on the length of the grouped observable, store the omegas in the array
    omegas[idx : idx + len(out_plus[0]), :] = 0.5 * (
        np.array(out_plus).T - np.array(out_min).T
    )
    idx += len(out_plus[0])
return np.dot(self.coeffs, omegas)

throws an error:

_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
../../pennylane/optimize/lie_gradient.py:246: in step
    self.step_and_cost(
../../pennylane/optimize/lie_gradient.py:197: in step_and_cost
    omegas = self.get_omegas()
../../pennylane/optimize/lie_gradient.py:321: in get_omegas
    out = qml.execute(
../../pennylane/interfaces/batch/__init__.py:313: in execute
    with qml.tape.Unwrap(*tapes):
../../pennylane/tape/unwrap.py:82: in __enter__
    stack.enter_context(
../../../../anaconda3/envs/pennylane/lib/python3.8/contextlib.py:425: in enter_context
    result = _cm_type.__enter__(cm)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <pennylane.tape.unwrap.UnwrapTape object at 0x7f7ac8ed0b20>

    def __enter__(self):
>       self._original_params = self.tape.get_parameters(trainable_only=False)
E       AttributeError: 'tuple' object has no attribute 'get_parameters'

../../pennylane/tape/unwrap.py:132: AttributeError


@josh146 (Member):

Ah, my mistake 🤦 This is because qml.execute only accepts a flat list of tapes. You would need to do something like this:

c_plus, c_minus = list(zip(*circuits))

out = qml.execute(c_plus + c_minus, ...)
o_plus = out[:len(out) // 2]
o_minus = out[len(out) // 2:]

for out_plus, out_min in zip(o_plus, o_minus):
    ...

The main advantage here is that you are only calling execute once, so if you are using a cloud-based device, only one job is submitted over the API.
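
As a self-contained sketch of this batching pattern (the toy tapes, device, and variable names below are illustrative, not the PR's actual shifted circuits):

import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

# two toy tapes standing in for one (plus, minus) pair of shifted circuits
with qml.tape.QuantumTape() as tape_plus:
    qml.RX(0.1, wires=0)
    qml.expval(qml.PauliZ(0))

with qml.tape.QuantumTape() as tape_minus:
    qml.RX(-0.1, wires=0)
    qml.expval(qml.PauliZ(0))

circuits = [(tape_plus, tape_minus)]      # list of (plus, minus) pairs
c_plus, c_minus = list(zip(*circuits))    # unzip into two flat tuples

# a single qml.execute call: one job submission on a remote device
out = qml.execute(list(c_plus) + list(c_minus), dev, gradient_fn=None)
o_plus = out[: len(out) // 2]
o_minus = out[len(out) // 2 :]

for out_plus, out_min in zip(o_plus, o_minus):
    omega = 0.5 * (np.array(out_plus) - np.array(out_min))

The recombination step in the actual PR additionally has to handle outputs of different lengths from the grouped observables, as discussed in the next comment.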

@therooler (Collaborator, author):

Hmm, I'm running into some issues here because the returned circuit outputs have different lengths depending on the Pauli groupings. I should be able to fix this...

(More inline review comments on pennylane/optimize/lie_gradient.py; outdated, resolved)
@josh146 (Member) commented Nov 19, 2021

[sc-9759]

@therooler (Collaborator, author):

I implemented the approximate and exact (non-Trotterized) Lie gradient. The optimizer takes a single argument to toggle between exact and Trotterized evolution, and a restriction of the Lie gradient to some subspace can now be achieved by passing a qml.Hamiltonian object that spans this subspace. I still need to add more tests to ensure that everything is working properly, but after that it should be ready for another review, I think.
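
As a hedged sketch of what this interface could look like; the class and keyword names (LieAlgebraOptimizer, restriction, exact) follow the optimizer as it was eventually released and should be read as assumptions at this point in the thread:

import pennylane as qml

dev = qml.device("default.qubit", wires=2)
hamiltonian = qml.Hamiltonian([1.0, 1.0], [qml.PauliZ(0), qml.PauliZ(1)])

@qml.qnode(dev)
def circuit():
    qml.RY(0.4, wires=0)
    qml.CNOT(wires=[0, 1])
    return qml.expval(hamiltonian)

# restrict the Lie gradient to the subspace spanned by these Paulis and
# use exact (non-Trotterized) evolution
restriction = qml.Hamiltonian([1.0, 1.0], [qml.PauliX(0), qml.PauliY(1)])
opt = qml.LieAlgebraOptimizer(circuit=circuit, restriction=restriction, exact=True)
circuit, cost = opt.step_and_cost()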

@josh146 (Member) left a comment

@therooler great improvement, the new logic looks 💯! Very clean. I left a lot of comments, but the PR is in a really good spot; my comments mostly relate to final touch-ups such as the documentation. In particular, ensuring that the usage, behaviour, advantages and restrictions of the new LieGradientOptimizer are properly motivated and explained :)

(More inline review comments on pennylane/optimize/lie_gradient.py and tests/optimize/test_lie_gradient.py; outdated, resolved)
therooler and others added 7 commits on November 25, 2021, co-authored by Josh Izaac <josh146@gmail.com>
@therooler (Collaborator, author):

Hi, I think I managed to address most of your comments:

  • I figured out a way to flatten the tapes and vectorize everything and it works. Yay!
  • I improved the docs and rendered them to make sure that things look all right (sorry for the horrible state they were in before 🙈).
  • Added a better description of the optimizer; it probably still needs improvement.

Thanks for the great feedback 😄.

@josh146 (Member) left a comment

@therooler this PR is now looking in great shape, and all the documentation renders very nicely! Great job.

This is essentially merge-ready, with only two things I noticed that should be done/considered:

  1. Don't forget to run black -l 100 pennylane tests and ensure all the CodeFactor/pylint errors are resolved!

  2. It would be nice to consider and document (with an example) how a user extracts the optimized circuit structure/QNode.
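
On point 2, a hedged sketch of one way a user might run a few steps and then inspect the grown circuit; the class name and the (QNode, cost) return convention follow the optimizer as eventually released, and the toy circuit is illustrative:

import pennylane as qml

dev = qml.device("default.qubit", wires=2)
hamiltonian = qml.Hamiltonian([1.0], [qml.PauliZ(0) @ qml.PauliZ(1)])

@qml.qnode(dev)
def circuit():
    qml.RY(0.4, wires=0)
    qml.CNOT(wires=[0, 1])
    return qml.expval(hamiltonian)

opt = qml.LieAlgebraOptimizer(circuit=circuit, stepsize=0.1)
for _ in range(3):
    # each step returns the updated QNode, which now contains the appended unitaries
    circuit, cost = opt.step_and_cost()

# inspect the optimized circuit structure
print(qml.draw(circuit)())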

(More inline review comments on doc/introduction/optimizers.rst, doc/releases/changelog-dev.md, and pennylane/optimize/lie_gradient.py; outdated, resolved)
@therooler (Collaborator, author):

I checked the docs and ran pylint and black. I think it all looks good! Oh, and I renamed everything to the Lie algebra optimizer with Riemannian gradients.

@josh146 (Member) left a comment

@therooler 💯 Looks very polished, will be very nice to have this in!

I'm happy to approve; this is a great feature to have in its current form.

I only have a couple of things that might be worth considering, either here or in a separate PR:

  • It seems it would be trivial to support parametrized QNodes
  • Maybe out of scope for this PR, but I would be curious to know if this works with other autodiff frameworks such as TF or JAX. While the autodiff functionality of those frameworks is not needed, what could be cool is using their just-in-time compilation support to speed up the optimization.

(More inline review comments on doc/introduction/optimizers.rst and pennylane/optimize/lie_algebra.py; outdated, resolved)
Comment on lines +266 to +276
def step_and_cost(self):
r"""Update the circuit with one step of the optimizer and return the corresponding
objective function value prior to the step.

Returns:
tuple[.QNode, float]: the optimized circuit and the objective function output prior
to the step.
"""
# pylint: disable=not-callable

cost = self.circuit()
@josh146 (Member):

@therooler this is my fault for not noticing this earlier, but:

I think it should be trivial to support parametrized QNodes by simply doing

Suggested change

Before:

def step_and_cost(self):
    r"""Update the circuit with one step of the optimizer and return the corresponding
    objective function value prior to the step.

    Returns:
        tuple[.QNode, float]: the optimized circuit and the objective function output prior
        to the step.
    """
    # pylint: disable=not-callable
    cost = self.circuit()

After:

def step_and_cost(self, *args, **kwargs):
    r"""Update the circuit with one step of the optimizer and return the corresponding
    objective function value prior to the step.

    Returns:
        tuple[.QNode, float]: the optimized circuit and the objective function output prior
        to the step.
    """
    # pylint: disable=not-callable
    cost = self.circuit(*args, **kwargs)

(and similar for self.step()).

This should be a minor change, but potentially a big quality-of-life improvement for users, and it would bring the optimizer in line with the other optimizers!

@josh146 (Member):

(Note that this is not required for approval, as I have already approved)

@therooler (Collaborator, author):

Let me see if this breaks things.

@therooler (Collaborator, author):

So the issue here is that I need to construct the circuit in init, otherwise things break. I haven't been able to figure out how to make things work without that.

@josh146 (Member):

Oh of course, thanks for exploring this!

@josh146 merged commit 23b82a4 into PennyLaneAI:master on Dec 3, 2021