This repository was archived by the owner on Sep 10, 2025. It is now read-only.
[Distributed Inference] Make torchrun work for torchchat and fix TP bugs #877
Merged
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchchat/877. Note: links to docs will display an error until the docs builds have been completed.
✅ No failures as of commit 655ea0f with merge base c716548. (This comment was automatically generated by Dr. CI and updates every 15 minutes.)
lessw2020 reviewed and approved these changes on Jul 2, 2024
lessw2020 (Contributor) left a comment:
Thanks for adding this, especially the OOM (device 0) fix.
Tiny nit: update the one TP comment and remove the reference to sequence parallel, since it's not being used now.
vmpuri pushed a commit that referenced this pull request on Jul 8, 2024: [Distributed Inference] Make torch run work for torchchat and fix TP bugs (#877)
malfet pushed a commit that referenced this pull request on Jul 17, 2024: [Distributed Inference] Make torch run work for torchchat and fix TP bugs (#877)
Somehow in torchchat we only set the device to the bare "cuda", which makes every rank use cuda:0 and leads to a CUDA OOM during checkpoint loading. With that fixed, I can now run all the way until the prompt shows up. But each rank now asks for input separately, so we have to hit Enter once per rank; that is something we need to solve next.
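Below is a minimal sketch of the per-rank device fix described above, assuming a torchrun launch (which sets LOCAL_RANK) and a recent PyTorch; the checkpoint path and variable names are illustrative, not the actual torchchat code.

```python
import os

import torch

# torchrun exports LOCAL_RANK for every process it spawns.
local_rank = int(os.environ.get("LOCAL_RANK", "0"))

# Pin each rank to its own GPU instead of using the bare "cuda",
# which resolves to cuda:0 on every rank.
device = torch.device(f"cuda:{local_rank}")
torch.cuda.set_device(device)

# Load the checkpoint onto this rank's device so the ranks don't all
# pile their weights onto cuda:0 and OOM it ("model.pth" is a placeholder).
state_dict = torch.load("model.pth", map_location=device, mmap=True)
```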
Also, for the TP part we need to use plain tensor parallelism, not the sequence parallelism we used for training.
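As a rough illustration of the difference, here is a minimal plain-TP sketch using torch.distributed.tensor.parallel on a recent PyTorch; the toy MLP and the "w1"/"w2" module names are assumptions for illustration, not the actual torchchat model or parallelization plan.

```python
import os

import torch
import torch.nn as nn
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor.parallel import (
    ColwiseParallel,
    RowwiseParallel,
    parallelize_module,
)


class MLP(nn.Module):
    def __init__(self, dim: int = 1024, hidden: int = 4096):
        super().__init__()
        self.w1 = nn.Linear(dim, hidden, bias=False)
        self.w2 = nn.Linear(hidden, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.w2(torch.relu(self.w1(x)))


# One-dimensional device mesh spanning all ranks of the torchrun launch.
world_size = int(os.environ.get("WORLD_SIZE", "1"))
tp_mesh = init_device_mesh("cuda", (world_size,))

# Plain TP: shard w1 column-wise and w2 row-wise so activations stay
# replicated at block boundaries, rather than being sharded along the
# sequence dimension as sequence parallelism would do.
mlp = MLP().to("cuda")
parallelize_module(
    mlp,
    tp_mesh,
    {"w1": ColwiseParallel(), "w2": RowwiseParallel()},
)
```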
To test distributed inference (DI) with torchrun, just run `./distributed/run_dist_inference.sh`.