Consolidate and contribute back changes from QHack 2022 #3

Merged: 1 commit into QTrkX:main on Mar 15, 2022

Conversation


@amirebrahimi (Contributor) commented on Feb 26, 2022

Hi Cenk,

First off -- thank you for supporting us during QHack 2022.

I hope this contribution back to QTrkX will be a return on your investment. You're welcome to take all, some, or none of the changes, depending on whether you think they would benefit the project.

For reference, our final QHack submission is here, which includes our report.

This PR contains the following changes:

  • Additional ansatze:
    • '10_2', which we found improved precision by 6% (our main finding) when compared against '10' on a smaller dataset
    • '10_3', which performed as well as '10' but chooses rx, ry, and rz gates at random, so it may be interesting to analyze
    • A version of '10' with the two-design structure you mentioned might be helpful (it removes the circular entanglement)
    • A version of '10' that uses identity blocks, as described in this TF tutorial and originally from Grant, 2019
    • A utility function that can generate identity-block versions of any of the other ansatze
  • Replacement of sigmoid activations with relu followed by rescaling layers ([0, 1] range), which removes the immediate vanishing gradients that show up in the model; a minimal sketch of this swap follows the list
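As a minimal sketch of the activation swap (the layer shapes and the exact scale/offset are illustrative, not the values used in the repo):

```python
import tensorflow as tf

# Before: a sigmoid output head, which saturates and can produce
# vanishing gradients from the very start of training.
sigmoid_head = tf.keras.layers.Dense(1, activation="sigmoid")

# After: relu keeps gradients alive, and a Rescaling layer maps the
# outputs back toward the [0, 1] range expected downstream. The
# scale/offset here are placeholders, not the repo's exact values.
relu_head = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation="relu"),
    tf.keras.layers.Rescaling(scale=0.5, offset=0.0),
])
```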

Regarding the utility function: please note that only qc10_pqc has been modified to work with it. For any other ansatz you'd like to try, you will need to:

  • support a symbol_offset optional parameter
  • return the symbols from the PQC from the function

See qc10_pqc_identity for an example; a rough sketch of the convention follows.
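The sketch below assumes a Cirq/sympy-based ansatz builder; the function name, gate pattern, and parameter counts are made up for illustration, and qc10_pqc_identity remains the authoritative reference:

```python
import cirq
import sympy

def example_pqc(qubits, n_layers=1, symbol_offset=0):
    """Toy ansatz builder following the convention above: parameter
    symbols start at `symbol_offset` (so identity-block copies can be
    stacked without symbol clashes), and the symbols are returned
    alongside the circuit."""
    n_params = len(qubits) * n_layers
    symbols = sympy.symbols(f"theta{symbol_offset}:{symbol_offset + n_params}")
    circuit = cirq.Circuit()
    idx = 0
    for _ in range(n_layers):
        for q in qubits:
            circuit += cirq.ry(symbols[idx])(q)
            idx += 1
        # Linear (non-circular) nearest-neighbour entanglement.
        for q0, q1 in zip(qubits, qubits[1:]):
            circuit += cirq.CZ(q0, q1)
    return circuit, list(symbols)
```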

Also, we have a notebook, adapted from the TensorFlow tutorial mentioned above, that analyzes the gradient variance in various ansatze (it loads this info from circuits_metadata.json). We found it useful for quickly analyzing the existing PQCs outside of the training pipeline. The notebook is not included in this PR, but you can find a copy here.
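The core of that analysis looks roughly like the following (adapted from the barren-plateaus recipe in the TF tutorial; the observable, sample count, and initialization range are illustrative):

```python
import numpy as np
import cirq
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

def gradient_variance(circuit, symbols, qubits, n_samples=100):
    """Estimate Var[dE/dtheta_0] over random parameter initializations."""
    expectation = tfq.layers.Expectation()
    observable = cirq.Z(qubits[0]) * cirq.Z(qubits[1])  # toy observable
    grads = []
    for _ in range(n_samples):
        values = tf.convert_to_tensor(
            np.random.uniform(0, 2 * np.pi, (1, len(symbols))),
            dtype=tf.float32)
        with tf.GradientTape() as tape:
            tape.watch(values)
            exp = expectation(circuit,
                              symbol_names=[str(s) for s in symbols],
                              symbol_values=values,
                              operators=observable)
        grads.append(tape.gradient(exp, values).numpy()[0, 0])
    return np.var(grads)

# Example with a toy two-qubit ansatz:
qubits = cirq.GridQubit.rect(1, 2)
theta = list(sympy.symbols("theta0:2"))
circuit = cirq.Circuit([cirq.ry(theta[0])(qubits[0]),
                        cirq.ry(theta[1])(qubits[1]),
                        cirq.CZ(qubits[0], qubits[1])])
print(gradient_variance(circuit, theta, qubits))
```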

An example graph:
[image]


@cnktysz (Contributor) left a comment


Hi Amir and others,
Thank you for the suggested changes. I think they look good and are definitely a good step toward making QGNNs possible. I was wondering whether we should put these changes in a separate file, but I now think that is not necessary; the commit message should make the changes clear to future users.

Once again, thank you for the great work!

@cnktysz cnktysz merged commit c76a52b into QTrkX:main Mar 15, 2022