
TmpPauliRot still decomposes if the theta of zero is trainable #4585

Merged · 9 commits · Sep 19, 2023

Conversation

Contributor

@timmysilv timmysilv commented Sep 8, 2023

Context:
While porting DQ2, the integration tests failed. It took me way too long to find out why, but it's because DQL kept the `TmpPauliRot` ops around despite their value originally being 0, while DQ2 did away with them, and that broke differentiation. I figure this was supposed to be an optimization, but it breaks assumptions made by DQ2.

The break is because DQL reaches [jax's trainable-parameter evaluation code](https://github.com/PennyLaneAI/pennylane/blob/cbc8d19e6a36f0d4d38b44eabd4a8b98db677327/pennylane/interfaces/jax.py#L121-L126) _before_ filtering out the zeros, thereby enabling differentiation over those zeros, while DQ2 reaches that code _after_ filtering them out (so it has no trainable parameters and doesn't differentiate over anything). The filtering logic assumed DQL's execution order, and the slight re-arrangement of when we decompose unsupported ops broke this with DQ2.
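A toy sketch of the ordering problem (stand-in functions only; nothing here is PennyLane's actual pipeline code):

```python
# Toy model of the ordering bug (stand-ins, not PennyLane's pipeline).
# Each parameter is a (value, trainable) pair.

def trainable_indices(params):
    """Stand-in for the interface's trainable-parameter collection."""
    return [i for i, (_, trainable) in enumerate(params) if trainable]

def filter_zeros(params):
    """Stand-in for dropping zero-valued TmpPauliRot ops."""
    return [(v, t) for v, t in params if v != 0.0]

params = [(0.0, True), (1.3, False)]

# DQL-like order: collect trainables first, then filter. The trainable
# zero is registered, so it still gets differentiated.
dql_trainables = trainable_indices(params)                # [0]

# DQ2-like order: filter first, then collect. The trainable zero is
# gone, so nothing is differentiated and gradients silently vanish.
dq2_trainables = trainable_indices(filter_zeros(params))  # []
```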

Description of the Change:
`TmpPauliRot.compute_decomposition` no longer returns an empty decomposition when the theta value is trainable, even if that value is zero.

Benefits:
`SpecialUnitary` differentiation works with DQ2

Possible Drawbacks:
I feel like this still might hurt performance, but that is a consequence of how the `SpecialUnitary` implementation interacts with the execution pipeline's assumptions. Fixing that requires a deeper change, and it should probably happen in `SpecialUnitary`'s code.
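A minimal sketch of the guard described above, using hypothetical stand-ins (`FakeTensor`, tuple-based ops) rather than PennyLane's real `TmpPauliRot`/`PauliRot` classes; only the control flow mirrors the fix:

```python
# Hypothetical sketch: the guard only skips the decomposition for a
# *non-trainable* zero. FakeTensor and the tuple "ops" are stand-ins,
# not PennyLane's real classes.

class FakeTensor:
    """Stand-in for a framework tensor that carries a trainability flag."""
    def __init__(self, value, requires_grad=False):
        self.value = value
        self.requires_grad = requires_grad

def compute_decomposition(theta, wires, pauli_word):
    value = getattr(theta, "value", theta)
    trainable = getattr(theta, "requires_grad", False)
    if value == 0 and not trainable:
        # Safe to optimize away: a constant zero rotation is the identity.
        return []
    # A trainable zero must survive so the interface can differentiate it.
    return [("PauliRot", value, pauli_word, tuple(wires))]

print(compute_decomposition(0.0, [0], "X"))                    # []
print(compute_decomposition(FakeTensor(0.0, True), [0], "X"))  # one PauliRot
```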

@codecov

codecov bot commented Sep 8, 2023

Codecov Report

Patch coverage: 100.00% and no project coverage change.

Comparison is base (fae8f1c) 99.64% compared to head (0dea8e0) 99.64%.

Additional details and impacted files
@@           Coverage Diff           @@
##           master    #4585   +/-   ##
=======================================
  Coverage   99.64%   99.64%           
=======================================
  Files         375      375           
  Lines       33358    33358           
=======================================
  Hits        33239    33239           
  Misses        119      119           
| Files Changed | Coverage Δ |
| --- | --- |
| pennylane/ops/qubit/special_unitary.py | 100.00% <100.00%> (ø) |


@timmysilv
Contributor Author

[sc-44387]


@rmoyard rmoyard left a comment


The change seems technically fine, but I don't have enough knowledge about SU to know where it has an impact; I'll let @dwierichs or @Qottmann accept it first.

@Qottmann
Contributor

Simply removing it is a bit problematic, because without this custom decomposition the pulse gradients always get 4**n-2 unnecessary, empty rotation gates. This slows down simulation and generates additional noise on real-device execution (if not transpiled away, which currently is not the case AFAIK).
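For scale, the 4**n-2 count cited above grows exponentially with the number of wires:

```python
# Number of unnecessary empty rotation gates cited above: 4**n - 2 for n wires.
for n in range(1, 6):
    print(f"{n} wires -> {4**n - 2} empty rotation gates")
```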

@timmysilv timmysilv changed the title remove custom decomposition for TmpPauliRot TmpPauliRot still decomposes if the theta of zero is trainable Sep 15, 2023
@timmysilv
Contributor Author

I have updated the changes and PR description. I'm not convinced this is really better, but it's clearer. Let me know your thoughts.

@rmoyard rmoyard self-requested a review September 15, 2023 16:32

@rmoyard rmoyard left a comment


It looks good to me 👍


@Qottmann Qottmann left a comment


lgtm 👍

@timmysilv timmysilv enabled auto-merge (squash) September 19, 2023 14:04
@timmysilv timmysilv merged commit 69475a2 into master Sep 19, 2023
39 checks passed
@timmysilv timmysilv deleted the special-unitary-works-dq2 branch September 19, 2023 15:02
mudit2812 pushed a commit that referenced this pull request Sep 19, 2023
Co-authored-by: Romain Moyard <rmoyard@gmail.com>

dwierichs commented Sep 27, 2023

Very neat solution, this. :) @timmysilv

edit: Unfortunately, this incurs a performance regression that cannot be solved easily.
