[dist_optim] update the doc of DistributedOptimizer
Update the docstring of DistributedOptimizer to include TorchScript enablement information.

ghstack-source-id: 2fd88164f11550a21d8bf3e4f119c76e50641204
Pull Request resolved: #51314
wanchaol committed Jan 29, 2021
1 parent d035d56 commit 9719936
Showing 1 changed file with 7 additions and 0 deletions: torch/distributed/optim/optimizer.py
@@ -150,6 +150,13 @@ class DistributedOptimizer:
to the latest forward pass executed on a given worker. Also, there is no
guaranteed ordering across workers.

`DistributedOptimizer` creates the local optimizer with TorchScript enabled
by default, so that optimizer updates are not blocked by the Python Global
Interpreter Lock (GIL) during multithreaded training (e.g. Distributed Model
Parallel). This feature is currently in beta, enabled for optimizers
including `Adagrad`, `Adam`, `SGD`, `RMSprop`, `AdamW`, and `Adadelta`. We
are expanding coverage to all optimizers in future releases.

Args:
optimizer_class (optim.Optimizer): the class of optimizer to
instantiate on each worker.
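For readers unfamiliar with the API, here is a minimal usage sketch that closely follows the example in the DistributedOptimizer docstring. It assumes an RPC group has already been initialized with `rpc.init_rpc` and that a peer named "worker1" (a name used purely for illustration) is reachable. `DistributedOptimizer` instantiates the given optimizer class (here `optim.SGD`, one of the TorchScript-enabled optimizers listed above) on each worker that owns parameters and steps them inside a distributed autograd context.

    import torch
    import torch.distributed.autograd as dist_autograd
    import torch.distributed.rpc as rpc
    from torch import optim
    from torch.distributed.optim import DistributedOptimizer

    with dist_autograd.context() as context_id:
        # Forward pass: create remote tensors on "worker1".
        rref1 = rpc.remote("worker1", torch.add, args=(torch.ones(2), 3))
        rref2 = rpc.remote("worker1", torch.add, args=(torch.ones(2), 1))
        loss = rref1.to_here() + rref2.to_here()

        # Backward pass through the distributed autograd engine.
        dist_autograd.backward(context_id, [loss.sum()])

        # DistributedOptimizer creates the local optimizer on each worker
        # holding the referenced parameters; with the TorchScript-enabled
        # optimizers it can step them without holding the Python GIL.
        dist_optim = DistributedOptimizer(
            optim.SGD,       # optimizer_class
            [rref1, rref2],  # RRefs to the parameters to optimize
            lr=0.05,
        )
        dist_optim.step(context_id)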
