Update docs: Add warning for device_map=None for load_checkpoint_and_dispatch (#2308)

* Update docs: Add warning for device_map=None for load_checkpoint_and_dispatch

* Fix style errors.
PhilJd committed Jan 9, 2024
1 parent 5cac878 commit 2241725
Showing 1 changed file with 4 additions and 1 deletion.
5 changes: 4 additions & 1 deletion src/accelerate/big_modeling.py
```diff
@@ -73,6 +73,8 @@ def init_empty_weights(include_buffers: bool = None):
     Any model created under this context manager has no weights. As such you can't do something like
     `model.to(some_device)` with it. To load weights inside your empty model, see [`load_checkpoint_and_dispatch`].
+    Make sure to overwrite the default device_map param for [`load_checkpoint_and_dispatch`], otherwise dispatch is not
+    called.

     </Tip>
     """
```
```diff
@@ -479,7 +481,8 @@ def load_checkpoint_and_dispatch(
             name, once a given module name is inside, every submodule of it will be sent to the same device.
             To have Accelerate compute the most optimized `device_map` automatically, set `device_map="auto"`. For more
-            information about each option see [here](big_modeling#designing-a-device-map).
+            information about each option see [here](../concept_guides/big_model_inference#designing-a-device-map).
+            Defaults to None, which means [`dispatch_model`] will not be called.
         max_memory (`Dict`, *optional*):
             A dictionary device identifier to maximum memory. Will default to the maximum memory available for each GPU
             and the available CPU RAM if unset.
```
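For the `device_map` semantics this hunk documents, a hand-written map can also be passed. Continuing the sketch above (the module names "0", "1", "2" are the children of the toy `nn.Sequential`, and the paths are placeholders):

```python
# Each key names a module; every submodule of a mapped module is sent to the
# same device. "disk" offloads those weights to offload_folder instead of RAM.
device_map = {"0": 0, "1": "cpu", "2": "disk"}

model = load_checkpoint_and_dispatch(
    model,
    checkpoint="path/to/checkpoint",  # placeholder path
    device_map=device_map,            # or "auto" to let Accelerate compute a map
    offload_folder="offload",         # needed because one module is mapped to "disk"
)
```

With `device_map="auto"`, `max_memory` (e.g. `{0: "8GiB", "cpu": "16GiB"}`) can bound how much each device is allowed to hold when the map is computed.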
