
Restructures preprocess to include building block, extensible transforms #4659

Merged — 25 commits merged into master on Oct 16, 2023

Conversation

@albi3ro (Contributor) commented Oct 6, 2023

The current devices.qubit.preprocess module has several transforms optimized for default qubit. Other devices would need to rewrite similar code in order to extend the behavior.

This PR restructures the module so that devices.preprocess now offers the following transforms:

  • decompose: accepts a stopping condition and decomposes operations until the stopping condition is met
  • validate_measurements: accepts a stopping_condition for observables. It validates that all observables are accepted, that only sample-based measurements are present when sampling, and that only state-based measurements are present in analytic mode
  • validate_device_wires: makes sure device wire constraints are met, and fills in all wires for measurements like StateMP and ProbabilityMP that act on all available wires
  • validate_multiprocessing_workers: checks that the CPU is not oversubscribed for a given number of workers
  • warn_about_trainable_observables: raises a warning if any observables are trainable

For example, a plugin with a list of supported operations can use the decompose transform by doing:

# `supported_operations` is assumed to be the device's collection of
# supported operation names
def stopping_condition(op: qml.operation.Operator) -> bool:
    return op.name in supported_operations

transform_program.add_transform(decompose, stopping_condition=stopping_condition)
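The core idea behind such a decompose transform — keep expanding operations until every one satisfies the stopping condition — can be illustrated with a small stdlib-only sketch. The names here (`decompose_until`, the toy gate names, the `expansions` table) are hypothetical stand-ins, not PennyLane internals:

```python
# Illustrative sketch: expand operations until each one satisfies the
# stopping condition. Toy strings stand in for real operator objects.

def decompose_until(ops, stopping_condition, expand):
    """Return ops with every unsupported op recursively expanded."""
    result = []
    stack = list(reversed(ops))
    while stack:
        op = stack.pop()
        if stopping_condition(op):
            result.append(op)
        else:
            # Push the decomposition back so its pieces are checked too.
            stack.extend(reversed(expand(op)))
    return result

# Toy example: only "RX"/"RZ" are supported; "H" expands into both.
supported = {"RX", "RZ"}
expansions = {"H": ["RZ", "RX", "RZ"]}
out = decompose_until(["H", "RX"], lambda op: op in supported, expansions.__getitem__)
```

A device with a different set of supported gates would only swap in its own stopping condition and decomposition rule, which is the extensibility point this PR is after.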

Overview of all the changes:

This PR ends up changing a lot of lines of code, but most of the changes come from moving tests around.

As the tests for default qubit were getting quite extensive, I added a folder tests/devices/default_qubit and split the default qubit tests across two files there.

Any test for default qubit specific preprocessing went into tests/devices/default_qubit/test_default_qubit_preprocessing.py.

More general preprocessing tests are in tests/devices/test_preprocessing.py.

Since the transforms are no longer default qubit specific, they are moved out of the qubit folder.

Since DefaultQubit.preprocess was getting rather long, I split out two private helper methods, _add_adjoint_transforms and _setup_execution_config. This change just keeps DefaultQubit.preprocess a little more manageable.
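The shape of that split can be sketched in plain Python. The helper names follow the PR's description, but the class name, bodies, and the `gradient_method` check are simplified placeholders, not PennyLane's actual implementation:

```python
# Hypothetical sketch of a preprocess method split into private helpers,
# mirroring the refactor described above. Strings stand in for transforms.

class DefaultQubitSketch:
    def preprocess(self, execution_config=None):
        program = []
        self._add_general_transforms(program)
        # Only add adjoint-specific transforms when adjoint differentiation
        # is requested (simplified check for illustration).
        if getattr(execution_config, "gradient_method", None) == "adjoint":
            self._add_adjoint_transforms(program)
        config = self._setup_execution_config(execution_config)
        return program, config

    def _add_general_transforms(self, program):
        program.append("decompose")
        program.append("validate_measurements")

    def _add_adjoint_transforms(self, program):
        program.append("adjoint_decompose")

    def _setup_execution_config(self, execution_config):
        # Fill in defaults when no config is supplied.
        return execution_config or {"interface": None}
```

Each helper stays small and testable on its own, which is the point of the split.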

@github-actions (bot) commented Oct 6, 2023

Hello. You may have forgotten to update the changelog!
Please edit doc/releases/changelog-dev.md with:

  • A one-to-two sentence description of the change. You may include a small working example for new features.
  • A link back to this PR.
  • Your name (or GitHub username) in the contributors section.

@AmintorDusko (Contributor) left a comment:

Some comments and fixes for typos.

Co-authored-by: Amintor Dusko <87949283+AmintorDusko@users.noreply.github.com>
@albi3ro (Contributor, Author) commented Oct 10, 2023

[sc-47192]

@codecov (bot) commented Oct 11, 2023

Codecov Report

All modified lines are covered by tests ✅

Comparison: base (e68a56f) 99.64% vs. head (75d9699) 99.64%.

Additional details and impacted files
@@           Coverage Diff           @@
##           master    #4659   +/-   ##
=======================================
  Coverage   99.64%   99.64%           
=======================================
  Files         376      376           
  Lines       33810    33828   +18     
=======================================
+ Hits        33689    33707   +18     
  Misses        121      121           
Files Coverage Δ
pennylane/devices/__init__.py 100.00% <ø> (ø)
pennylane/devices/default_qubit.py 100.00% <100.00%> (ø)
pennylane/devices/preprocess.py 100.00% <100.00%> (ø)
pennylane/devices/qubit/__init__.py 100.00% <ø> (ø)
pennylane/transforms/defer_measurements.py 100.00% <100.00%> (ø)


@trbromley (Contributor) left a comment:

Thanks @albi3ro! I think this is great, but we need to polish the docstrings with an explanation of what each transform does, when a dev would use it, and some examples.

Also a more philosophical question - why do we need these to be transforms?

Co-authored-by: Tom Bromley <49409390+trbromley@users.noreply.github.com>
@albi3ro (Contributor, Author) commented Oct 12, 2023

Thanks @albi3ro! I think this is great, but we need to polish the docstrings with an explanation of what each transform does, when a dev would use it, and some examples.

Also a more philosophical question - why do we need these to be transforms?

These will be objects in a TransformProgram.

We can either use TransformProgram.add_transform(transform, *args, **kwargs) as we are doing now or:

bound_transform = TransformContainer(preprocessing_func, args=args, kwargs=kwargs)
transform_program.push_back(bound_transform)

The option we're using right now just looks a bit cleaner.
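The equivalence of the two registration styles can be sketched in plain Python. These classes are simplified stand-ins modeled loosely on PennyLane's TransformProgram and TransformContainer, not the real implementations:

```python
# Simplified stand-ins illustrating the two equivalent ways of
# registering a transform in a program.
from dataclasses import dataclass, field

@dataclass
class TransformContainer:
    transform: callable
    args: tuple = ()
    kwargs: dict = field(default_factory=dict)

@dataclass
class TransformProgram:
    _program: list = field(default_factory=list)

    def push_back(self, container):
        self._program.append(container)

    def add_transform(self, transform, *args, **kwargs):
        # Convenience wrapper: builds the container for the caller.
        self.push_back(TransformContainer(transform, args, kwargs))

def decompose(tape, stopping_condition=None):
    return tape  # placeholder transform body

# Option 1: the convenience method used in this PR.
prog_a = TransformProgram()
prog_a.add_transform(decompose, stopping_condition=bool)

# Option 2: building and pushing the container by hand.
prog_b = TransformProgram()
prog_b.push_back(TransformContainer(decompose, kwargs={"stopping_condition": bool}))
```

Both programs end up holding the same bound transform; the first form simply hides the container construction.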

@AmintorDusko (Contributor) left a comment:

Hi @albi3ro, I have a few more suggestions.

albi3ro and others added 2 commits October 13, 2023 08:50
Co-authored-by: Amintor Dusko <87949283+AmintorDusko@users.noreply.github.com>
Co-authored-by: Amintor Dusko <87949283+AmintorDusko@users.noreply.github.com>
@AmintorDusko (Contributor) left a comment:

Hi @albi3ro, thank you for that!

@vincentmr (Contributor) left a comment:

Passed over it after everyone else and have nothing to report. Looks good to me! Thanks @albi3ro

@albi3ro albi3ro enabled auto-merge (squash) October 13, 2023 21:35
@albi3ro albi3ro merged commit 4c3ab72 into master Oct 16, 2023
39 checks passed
@albi3ro albi3ro deleted the cleaning-preproces branch October 16, 2023 18:16