Enhance new_group doc to mention using NCCL concurrently. (#48872)
Summary:
Pull Request resolved: #48872

Using NCCL communicators concurrently is not safe, and this is
documented in the NCCL docs.

However, this limitation is not documented in PyTorch, so this change adds
documentation for ProcessGroupNCCL to make users aware of it.
ghstack-source-id: 118148014

Test Plan: waitforbuildbot

Reviewed By: rohan-varma

Differential Revision: D25351778

fbshipit-source-id: f7f448dc834c47cc1244f821362f5437dd17ce77
pritamdamania authored and facebook-github-bot committed Dec 9, 2020
1 parent c62f3fc commit 7584161
Showing 1 changed file with 11 additions and 0 deletions: torch/distributed/distributed_c10d.py
@@ -2349,6 +2349,17 @@ def new_group(ranks=None, timeout=default_pg_timeout, backend=None):
    if they are not going to be members of the group. Additionally, groups
    should be created in the same order in all processes.

    .. warning::
        Using multiple process groups with the ``NCCL`` backend concurrently
        is not safe and the user should perform explicit synchronization in
        their application to ensure only one process group is used at a time.
        This means collectives from one process group should have completed
        execution on the device (not just enqueued, since CUDA execution is
        async) before collectives from another process group are enqueued.
        See `Using multiple NCCL communicators concurrently <https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/usage/communicators.html#using-multiple-nccl-communicators-concurrently>`_
        for more details.

    Arguments:
        ranks (list[int]): List of ranks of group members. If ``None``, will be
            set to all ranks. Default is ``None``.
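
As an illustration of the synchronization pattern the new warning calls for, here is a minimal sketch. It is not part of the commit: the two-rank setup, tensor size, and device selection are hypothetical, and it assumes the default process group was already initialized with the NCCL backend.

```python
import torch
import torch.distributed as dist

# Assumes dist.init_process_group("nccl", ...) has already run on every rank
# and each process has selected its GPU via torch.cuda.set_device(...).

# Two process groups over the same (hypothetical) ranks.
group_a = dist.new_group(ranks=[0, 1])
group_b = dist.new_group(ranks=[0, 1])

x = torch.ones(1024, device="cuda")

# Enqueue a collective on the first group.
dist.all_reduce(x, group=group_a)

# NCCL collectives are enqueued asynchronously on a CUDA stream. Block until
# the first collective has actually finished executing on the device, not
# just been enqueued, before touching the other process group.
torch.cuda.synchronize()

# Only now is it safe to enqueue a collective on the second group.
dist.all_reduce(x, group=group_b)
```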
