[cgraph] Avoid depending on torch CPU module for CPU-only actor #53849
Conversation
Signed-off-by: huafengchun <huafengchun@gmail.com>
Pull Request Overview

This PR updates `AcceleratorContext` to skip importing the torch CPU backend when the device is CPU-only, removing an unnecessary dependency on `torch.cpu`.

- Imports `torch.<backend>` conditionally, only for non-CPU devices
- Eliminates the unconditional `torch.cpu` import for CPU-only actors
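The guarded import described above can be sketched as follows. The class name and constructor signature come from the snippets quoted later in this review, but the body is a hypothetical reconstruction for illustration, not the actual ray source; `Communicator` is a stand-in for ray's real interface.

```python
import importlib
from typing import Type


class Communicator:
    """Stand-in for ray's Communicator interface (assumed, not the real class)."""


class AcceleratorContext:
    def __init__(self, torch_module_name: str, communicator_cls: Type[Communicator]):
        self._communicator_cls = communicator_cls
        # Define the attribute unconditionally so CPU-only actors never hit
        # an AttributeError; it stays None when no torch backend is needed.
        self._torch_mod = None
        # Import the torch backend module (e.g., torch.cuda) only if the
        # device is not 'cpu'.
        if torch_module_name != "cpu":
            self._torch_mod = importlib.import_module(f"torch.{torch_module_name}")
```

With this shape, a CPU-only actor constructs the context without ever touching torch, while GPU actors still load their backend module lazily by name.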
Comments suppressed due to low confidence (3)
python/ray/experimental/channel/accelerator_context.py:40
- The comment should mention how the CPU case is handled (e.g., that no torch backend is loaded or that `self._torch_mod` will be `None`). This will clarify the expected behavior for maintainers.

# Import the torch backend module (e.g., torch.cuda) if the device is not 'cpu'.
python/ray/experimental/channel/accelerator_context.py:1
- Add a unit test for the CPU-only path to verify that `AcceleratorContext` initializes correctly without importing a torch backend and does not raise errors when accessing `self._torch_mod`.

def __init__(self, torch_module_name: str, communicator_cls: Type[Communicator])
python/ray/experimental/channel/accelerator_context.py:41
- When `torch_module_name` is "cpu", `self._torch_mod` is never defined, which may lead to an AttributeError later. Consider adding an `else` block to define `self._torch_mod` (e.g., `None` or the base `torch` module), or refactor usage to handle a missing backend module.

if torch_module_name != "cpu":
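A unit test for the CPU-only path, as the review suggests, might look like the sketch below. `AcceleratorContext` here is a minimal stand-in carrying only the guarded import, since the real ray module is not reproduced in this review; the assertion is what matters: on the CPU path the attribute is defined and no torch backend gets imported.

```python
import importlib
from typing import Type


class AcceleratorContext:
    """Minimal stand-in for the real class, assumed for illustration."""

    def __init__(self, torch_module_name: str, communicator_cls: Type):
        self._communicator_cls = communicator_cls
        self._torch_mod = None  # defined on every path, per the review comment
        if torch_module_name != "cpu":
            self._torch_mod = importlib.import_module(f"torch.{torch_module_name}")


def test_cpu_only_context_skips_torch_backend():
    # CPU-only path: no torch backend import, attribute access does not raise.
    ctx = AcceleratorContext("cpu", object)
    assert ctx._torch_mod is None
```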
@ruisearch42 @kevin85421 Could you please review this bug fix? Thanks.
Thanks. Let me trigger the tests.
…project#53849)
Signed-off-by: huafengchun <huafengchun@gmail.com>
Signed-off-by: Scott Lee <scott.lee@rebellions.ai>
Signed-off-by: elliot-barn <elliot.barnwell@anyscale.com>
Signed-off-by: Goutam V <goutam@anyscale.com>
Why are these changes needed?
When the actor involved in the Compiled Graph uses only the CPU, AcceleratorContext still loads torch's CPU module, creating an avoidable dependency.
This PR removes that dependency by skipping the torch backend import for CPU-only actors.
Related issue number
Closes #53716
Checks
- I've signed off every commit (using `git commit -s`) in this PR.
- I've run `scripts/format.sh` to lint the changes in this PR.
- If I've added a new method in Tune, I've added it in `doc/source/tune/api/` under the corresponding `.rst` file.