
Restore batching memory control functionality to LightningGPU #564

Merged: 14 commits from fix_lgpu_batching into master, Nov 24, 2023

Conversation

@mlxd (Member) commented Nov 22, 2023

Before submitting

Please complete the following checklist when submitting a PR:

  • All new features must include a unit test.
    If you've fixed a bug or added code that should be tested, add a test to the
    tests directory!

  • All new functions and code must be clearly commented and documented.
    If you do make documentation changes, make sure that the docs build and
    render correctly by running make docs.

  • Ensure that the test suite passes, by running make test.

  • Add a new entry to the .github/CHANGELOG.md file, summarizing the
    change, and including a link back to the PR.

  • Ensure that code is properly formatted by running make format.

When all the above are checked, delete everything above the dashed
line and fill in the pull request template.


Context: This PR fixes batching support for LightningGPU devices, and adds preliminary support to enable it for LQ and LK at a later date.

Description of the Change:

Benefits:

Possible Drawbacks:

Related GitHub Issues:

@mlxd added the ci:use-multi-gpu-runner label ("Enable usage of Multi-GPU runner for this Pull Request") on Nov 22, 2023

codecov bot commented Nov 22, 2023

Codecov Report

Attention: 1 line in your changes is missing coverage. Please review.

Comparison is base (655d537) 98.69% compared to head (5205b8d) 98.92%.

Files Patch % Lines
pennylane_lightning/core/_serialize.py 93.75% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master     #564      +/-   ##
==========================================
+ Coverage   98.69%   98.92%   +0.23%     
==========================================
  Files         168      202      +34     
  Lines       22585    27272    +4687     
==========================================
+ Hits        22290    26980    +4690     
+ Misses        295      292       -3     


@mlxd marked this pull request as ready for review on November 24, 2023 at 19:44
const auto first = static_cast<std::size_t>(
    std::ceil(obs.size() * i / num_chunks));

-auto jac_chunk = futures[i].get();
+auto jac_chunk = jac_futures[i].get();
 for (std::size_t j = 0; j < jac_chunk.size(); j++) {
     std::copy(jac_chunk.begin(), jac_chunk.end(),
A Member commented on this diff:
Since we would like to use the incoming jac span, can we add a // TODO here for it?

@multiphaseCFD (Member) left a comment:

Looks good to me! Thanks @mlxd !

@AmintorDusko (Contributor) left a comment:

Coverage will need some work. Thanks for your work.

@@ -52,9 +52,12 @@ class QuantumScriptSerializer:
"""
A Contributor commented on this diff:

As a note: the class docstring is incomplete. It was already missing use_mpi, and is now also missing split_obs.

@mlxd merged commit 028ad9b into master on Nov 24, 2023
77 of 78 checks passed
@mlxd deleted the fix_lgpu_batching branch on November 24, 2023 at 20:53
3 participants