
Improve information about group offloading and layerwise casting #11101

Merged
a-r-r-o-w merged 6 commits into main from improve-info-layerwise-and-group on Mar 24, 2025

Conversation

a-r-r-o-w
Member

No description provided.

@a-r-r-o-w a-r-r-o-w requested a review from DN6 March 18, 2025 06:42
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@@ -235,6 +246,13 @@ In the above example, layerwise casting is enabled on the transformer component

However, you gain more control and flexibility by directly utilizing the [`~hooks.layerwise_casting.apply_layerwise_casting`] function instead of [`~ModelMixin.enable_layerwise_casting`].

<Tip>

- Layerwise casting may not work with all models out-of-the-box. Sometimes, the forward implementations of the model contain weight-dependent typecasting of inputs. Such implementations are not supported due to the currently simplistic implementation of layerwise casting, which assumes that the forward pass is independent of the weight precision and that the input dtypes are always in `compute_dtype`. An example of an incompatible implementation can be found [here](https://github.com/huggingface/transformers/blob/7f5077e53682ca855afc826162b204ebf809f1f9/src/transformers/models/t5/modeling_t5.py#L294-L299).
Collaborator

Should also mention that it can be disabled on modules with the skip patterns.
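
For reference, a minimal sketch of the two entry points the hunk above refers to, assuming the `enable_layerwise_casting` method on `ModelMixin` and the `skip_modules_pattern` argument of `apply_layerwise_casting`; the checkpoint and the `"norm"` pattern are illustrative choices, not taken from this PR:

```python
import torch
from diffusers import CogVideoXTransformer3DModel
from diffusers.hooks import apply_layerwise_casting

transformer = CogVideoXTransformer3DModel.from_pretrained(
    "THUDM/CogVideoX-5b", subfolder="transformer", torch_dtype=torch.bfloat16
)

use_low_level_api = True
if use_low_level_api:
    # Lower-level hook: explicit control, including skipping modules whose
    # forward pass depends on weight precision. "norm" is an assumed,
    # illustrative pattern -- adjust it for the model at hand.
    apply_layerwise_casting(
        transformer,
        storage_dtype=torch.float8_e4m3fn,
        compute_dtype=torch.bfloat16,
        skip_modules_pattern=["norm"],
    )
else:
    # Convenience method on the model: store weights in FP8,
    # upcast to bfloat16 for compute.
    transformer.enable_layerwise_casting(
        storage_dtype=torch.float8_e4m3fn, compute_dtype=torch.bfloat16
    )
```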

@@ -198,6 +198,17 @@ export_to_video(video, "output.mp4", fps=8)

Group offloading (for CUDA devices with support for asynchronous data transfer streams) overlaps data transfer and computation to reduce the overall execution time compared to sequential offloading. This is enabled using layer prefetching with CUDA streams. The next layer to be executed is loaded onto the accelerator device while the current layer is being executed - this increases the memory requirements slightly. Group offloading also supports leaf-level offloading (equivalent to sequential CPU offloading) but can be made much faster when using streams.

<Tip>

- Group offloading may not work with all models out-of-the-box. If the forward implementations of the model contain weight-dependent device-casting of inputs, it may clash with the offloading mechanism's handling of device-casting.
Collaborator

Should also mention that it can be disabled on modules with the skip patterns.

Member Author

We don't support skipping in group offloading. Will mention for layerwise casting though
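
For reference, a minimal sketch of stream-based group offloading as described in the hunk above, assuming the `enable_group_offload` method exposed on `ModelMixin`; the checkpoint and device choices are illustrative, not taken from this PR:

```python
import torch
from diffusers import CogVideoXTransformer3DModel

transformer = CogVideoXTransformer3DModel.from_pretrained(
    "THUDM/CogVideoX-5b", subfolder="transformer", torch_dtype=torch.bfloat16
)

# Leaf-level group offloading with stream-based prefetching: parameters stay on
# the CPU and are moved to the GPU just before they are needed; use_stream=True
# prefetches the next group on a separate CUDA stream so the transfer overlaps
# with the current layer's compute.
transformer.enable_group_offload(
    onload_device=torch.device("cuda"),
    offload_device=torch.device("cpu"),
    offload_type="leaf_level",
    use_stream=True,
)
```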

a-r-r-o-w and others added 2 commits March 18, 2025 14:33
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
@a-r-r-o-w a-r-r-o-w requested a review from DN6 March 18, 2025 09:11
@a-r-r-o-w
Member Author

Failing test looks unrelated

@a-r-r-o-w a-r-r-o-w merged commit 1ddf3f3 into main Mar 24, 2025
14 of 15 checks passed
@a-r-r-o-w a-r-r-o-w deleted the improve-info-layerwise-and-group branch March 24, 2025 17:56