
[Core][Doc] Default to multiprocessing for single-node distributed case #5230

Merged (13 commits), Jun 11, 2024

Conversation

njhill (Member) commented Jun 3, 2024

Also update docs to reflect support for the multiprocessing distributed executor.

Resolves #4955
Resolves #5221

youkaichao (Member) left a comment

LGTM, thanks!

Yard1 (Collaborator) commented Jun 3, 2024

If we detect we are in a Ray placement group (ray.util.get_current_placement_group() is not None), we should still use Ray by default.
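Combining Yard1's caveat with the change in this PR, the default-selection logic would look something like the sketch below (a minimal illustration, not vllm's actual code; the function and backend names here are assumptions, though the real check does use `ray.util.get_current_placement_group()`):

```python
def choose_executor_backend(world_size: int, in_ray_placement_group: bool) -> str:
    """Pick a distributed executor backend (illustrative sketch).

    In the real code, in_ray_placement_group corresponds to
    ray.util.get_current_placement_group() is not None.
    """
    if in_ray_placement_group:
        # Already scheduled inside a Ray placement group: keep using Ray.
        return "ray"
    if world_size > 1:
        # Single-node multi-GPU: default to multiprocessing per this PR.
        return "mp"
    # Single process: no distributed executor needed (label is hypothetical).
    return "uni"
```

A caller would pass the tensor-parallel world size and the placement-group check; only when Ray is neither required nor already active does multiprocessing become the default.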

Review comment on vllm/config.py (outdated, resolved)
Co-authored-by: Antoni Baum <antoni.baum@protonmail.com>
rcarrata left a comment

LGTM, thanks @njhill !!

vrdn-23 commented Jun 6, 2024

Would it be possible to also make ray optional, or an extra, when installing vllm as part of this PR? Currently it is installed by default as part of vllm's requirements, so it will always be present.

njhill (Member, Author) commented Jun 7, 2024

@vrdn-23 I had considered this and actually have a change for it in a branch, since this is what we do in our own builds. But right now there are other dependencies included that could be considered optional as well, such as pillow, outlines, and lm_format_enforcer, so we should perhaps make those optional extras too. The only one managed that way right now is tensorizer.

It would be good to get thoughts from others on this too.
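Splitting these dependencies out might look something like the sketch below (extra names and version pins are hypothetical, not vllm's actual packaging; per the comment above, tensorizer is the only dependency managed this way today):

```python
# Hypothetical extras_require mapping for a setup.py; all names and
# version pins here are illustrative, not vllm's real configuration.
extras_require = {
    "ray": ["ray >= 2.9"],                                  # distributed executor backend
    "vision": ["pillow"],                                   # image inputs
    "guided-decoding": ["outlines", "lm-format-enforcer"],  # constrained generation
    "tensorizer": ["tensorizer"],                           # already an extra today
}
```

Users would then opt in explicitly, e.g. `pip install vllm[ray]`, instead of pulling every dependency by default.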

rcarrata commented

With all of these dependencies (not only ray; other components may cause issues as well), I'd suggest opening another PR from the branch @njhill mentioned, along with a GitHub issue to track the effort.

simon-mo (Collaborator) commented

+1 I'm in favor of optional dependencies and trimming down our dependencies in general. Please open PRs!


@simon-mo simon-mo merged commit 99dac09 into vllm-project:main Jun 11, 2024
19 of 21 checks passed
@njhill njhill deleted the default-dist-executor branch June 11, 2024 18:19
robertgshaw2-neuralmagic pushed a commit to neuralmagic/nm-vllm that referenced this pull request Jun 12, 2024
…se (vllm-project#5230)

Co-authored-by: Antoni Baum <antoni.baum@protonmail.com>
robertgshaw2-neuralmagic (Collaborator) commented

+1

joerunde pushed a commit to joerunde/vllm that referenced this pull request Jun 17, 2024
…se (vllm-project#5230)

Co-authored-by: Antoni Baum <antoni.baum@protonmail.com>