
Commit

docs fix
Scitator committed Feb 7, 2022
1 parent 4e8e77f commit d6849fe
Showing 1 changed file with 12 additions and 11 deletions.
23 changes: 12 additions & 11 deletions catalyst/engines/torch.py
@@ -51,17 +51,18 @@ class DistributedDataParallelEngine(Engine):
     """Distributed multi-GPU-based engine.
     Args:
-        *args: args for Accelerator.__init__
-        address: master node (rank 0)'s address, should be either the IP address or the hostname
-            of node 0, for single node multi-proc training, can simply be 127.0.0.1
-        port: master node (rank 0)'s free port that needs to be used for communication
-            during distributed training
-        world_size: the number of processes to use for distributed training.
-            Should be less or equal to the number of GPUs
-        process_group_kwargs: parameters for `torch.distributed.init_process_group`.
-            More info here:
-            https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group  # noqa: E501, W505
-        **kwargs: kwargs for Accelerator.__init__
+        *args: args for Accelerator.__init__
+        address: master node (rank 0)'s address,
+            should be either the IP address or the hostname
+            of node 0, for single node multi-proc training, can simply be 127.0.0.1
+        port: master node (rank 0)'s free port that needs to be used for communication
+            during distributed training
+        world_size: the number of processes to use for distributed training.
+            Should be less or equal to the number of GPUs
+        process_group_kwargs: parameters for `torch.distributed.init_process_group`.
+            More info here:
+            https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group  # noqa: E501, W505
+        **kwargs: kwargs for Accelerator.__init__
     """

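For context on the arguments described in the reformatted docstring, the following is a minimal usage sketch, not part of this commit, showing how the engine could be handed to a Catalyst runner. It assumes the runner API around this release (dl.SupervisedRunner with an engine= argument to runner.train); the address, port, and world_size values are placeholders for a single-node, two-GPU run.

# Illustrative sketch only (not from this commit); API and values are assumptions.
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset
from catalyst import dl

# Toy data and model so the sketch is self-contained.
X, y = torch.randn(64, 10), torch.randint(0, 2, (64,))
loaders = {"train": DataLoader(TensorDataset(X, y), batch_size=16)}
model = nn.Linear(10, 2)

runner = dl.SupervisedRunner()
runner.train(
    model=model,
    criterion=nn.CrossEntropyLoss(),
    optimizer=optim.Adam(model.parameters()),
    loaders=loaders,
    num_epochs=1,
    engine=dl.DistributedDataParallelEngine(
        address="127.0.0.1",  # master node (rank 0) address; enough for single-node training
        port=12345,           # free port on the master node used for communication
        world_size=2,         # number of processes, at most the number of GPUs
        process_group_kwargs={"backend": "nccl"},  # forwarded to torch.distributed.init_process_group
    ),
)

On a CPU-only machine the analogous placeholder would be a "gloo" backend in process_group_kwargs, with world_size set to the desired number of processes.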
