This repository was archived by the owner on Sep 10, 2025. It is now read-only.

Description
🚀 The feature, motivation and pitch
Once the distributed inference integration in torchchat is functional, let's add a `docs/distributed.md` with an example and plumb that example into `.ci/scripts/run-docs distributed`. (`updown.py` extracts all commands between triple backticks into a test script.)
torchchat has the same runners as pytorch/pytorch, so at least a minimal 2- or 4-GPU setup on a single node would be great. I'm not sure whether we can run multi-node testing; if not, you can suppress commands from the tests by wrapping them in `[skip default]: begin` and `[skip default]: end`.
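As a rough sketch, the doc could be structured like this, with the single-node commands runnable by CI and any multi-node commands fenced off by the skip markers. (The `torchrun` invocations and the `<distributed inference command>` placeholder are illustrative assumptions, not the final CLI; only the skip-marker convention is taken from `updown.py`.)

````markdown
# Distributed Inference

Run on a single node with 4 GPUs (extracted into the CI test script):

```
torchrun --nproc-per-node 4 <distributed inference command>
```

[skip default]: begin

Multi-node example (skipped in CI, since runners are single-node):

```
torchrun --nnodes 2 --nproc-per-node 4 <distributed inference command>
```

[skip default]: end
````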
cc: @mreso @lessw2020 @kwen2501
Alternatives
None
Additional context
No response
RFC (Optional)
No response