Fix DDP documentation (#46861)
Summary:
Pull Request resolved: #46861

Noticed that in the DDP documentation:
https://pytorch.org/docs/master/generated/torch.nn.parallel.DistributedDataParallel.html?highlight=distributeddataparallel
some examples used `torch.nn.DistributedDataParallel`; fix these to read
`torch.nn.parallel.DistributedDataParallel`.
ghstack-source-id: 115453703

Test Plan: ci

Reviewed By: pritamdamania87, SciPioneer

Differential Revision: D24534486

fbshipit-source-id: 64b92dc8a55136c23313f7926251fe825a2cb7d5
rohan-varma authored and facebook-github-bot committed Oct 29, 2020
1 parent 262bd64 commit ecdbea7
4 changes: 2 additions & 2 deletions torch/nn/parallel/distributed.py
@@ -329,7 +329,7 @@ class DistributedDataParallel(Module):
     Example::
         >>> torch.distributed.init_process_group(backend='nccl', world_size=4, init_method='...')
-        >>> net = torch.nn.DistributedDataParallel(model, pg)
+        >>> net = torch.nn.parallel.DistributedDataParallel(model, pg)
     """
     def __init__(self, module, device_ids=None,
                  output_device=None, dim=0, broadcast_buffers=True,
@@ -626,7 +626,7 @@ def no_sync(self):
        Example::
-           >>> ddp = torch.nn.DistributedDataParallel(model, pg)
+           >>> ddp = torch.nn.parallel.DistributedDataParallel(model, pg)
            >>> with ddp.no_sync():
            >>>     for input in inputs:
            >>>         ddp(input).backward()  # no synchronization, accumulate grads
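
For context, a minimal sketch (not part of this commit) of the corrected, fully qualified usage, including the no_sync() gradient-accumulation pattern touched by the second hunk. The toy nn.Linear model, optimizer, env:// rendezvous, and one-process-per-GPU setup below are illustrative assumptions, not taken from the patch:

    import torch
    import torch.distributed as dist
    import torch.nn as nn

    def run(rank, world_size):
        # One process per GPU; rank/world_size are normally supplied by the launcher.
        dist.init_process_group(backend="nccl", init_method="env://",
                                rank=rank, world_size=world_size)
        model = nn.Linear(10, 10).to(rank)
        # Note the full path: torch.nn.parallel.DistributedDataParallel
        ddp = torch.nn.parallel.DistributedDataParallel(model, device_ids=[rank])
        opt = torch.optim.SGD(ddp.parameters(), lr=0.01)

        inputs = [torch.randn(20, 10, device=rank) for _ in range(4)]
        with ddp.no_sync():
            for x in inputs[:-1]:
                ddp(x).sum().backward()   # accumulate grads locally, no all-reduce
        ddp(inputs[-1]).sum().backward()  # gradients are synchronized on this step
        opt.step()
        dist.destroy_process_group()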
