
Commit ee72271

humandotlearning and rohitgr7 authored Sep 2, 2020

Docs/improved multigpu doc clarity (Lightning-AI#3194)

* + clarifying doc
* improved documentation clarity
* docs minor correction
* remove device from type_as

Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
1 parent b951097 commit ee72271

File tree

2 files changed: +3 −3 lines changed

 

docs/source/multi_gpu.rst (+2 −2)
@@ -48,7 +48,7 @@ This will make your code scale to any arbitrary number of GPUs or TPUs with Lightning
     # with lightning
     def forward(self, x):
         z = torch.Tensor(2, 3)
-        z = z.type_as(x, device=self.device)
+        z = z.type_as(x)

 The :class:`~pytorch_lightning.core.lightning.LightningModule` knows what device it is on. You can access the reference via `self.device`.
 Sometimes it is necessary to store tensors as module attributes. However, if they are not parameters they will
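
For context, a minimal sketch of the corrected snippet inside a full LightningModule (the class name and tensor shape are illustrative, not part of the diff). `type_as()` takes no `device` argument, which is why the diff removes it: it casts the new tensor to the dtype of `x` and, for CUDA inputs, places it on the same device.

    import torch
    import pytorch_lightning as pl

    class LitModel(pl.LightningModule):
        def forward(self, x):
            # new tensors are created on the CPU by default
            z = torch.Tensor(2, 3)
            # match x's dtype and device in one call; no `device` kwarg exists
            z = z.type_as(x)
            return z
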
@@ -149,7 +149,7 @@ a comma separated list of GPU ids:
 .. testcode::
    :skipif: torch.cuda.device_count() < 2

-   # DEFAULT (int) specifies how many GPUs to use
+   # DEFAULT (int) specifies how many GPUs to use per node
    Trainer(gpus=k)

    # Above is equivalent to
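
For reference, a short sketch of the forms `gpus` accepts, as described by the surrounding docs (the concrete values are illustrative):

    from pytorch_lightning import Trainer

    # int: how many GPUs to use per node
    trainer = Trainer(gpus=2)

    # list of ids: train on GPUs 0 and 1
    trainer = Trainer(gpus=[0, 1])

    # str: a comma separated list of GPU ids, equivalent to the list form
    trainer = Trainer(gpus="0,1")
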

pytorch_lightning/trainer/trainer.py (+1 −1)
@@ -233,7 +233,7 @@ def __init__(

     num_nodes: number of GPU nodes for distributed training.

-    gpus: Which GPUs to train on.
+    gpus: number of gpus to train on (int) or which GPUs to train on (list or str) applied per node

     auto_select_gpus:
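
Because `gpus` is applied per node, the total number of training processes in multi-node DDP is `gpus * num_nodes`. A minimal sketch, assuming the 0.9-era `distributed_backend` argument (an assumption, not part of this diff):

    from pytorch_lightning import Trainer

    # 4 GPUs on each of 2 nodes -> 8 processes total,
    # since `gpus` counts GPUs per node, not overall
    trainer = Trainer(gpus=4, num_nodes=2, distributed_backend="ddp")
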

0 commit comments