2 files changed, +3 −3 lines

@@ -48,7 +48,7 @@ This will make your code scale to any arbitrary number of GPUs or TPUs with Lightning.
 # with lightning
 def forward(self, x):
     z = torch.Tensor(2, 3)
-    z = z.type_as(x, device=self.device)
+    z = z.type_as(x)

 The :class:`~pytorch_lightning.core.lightning.LightningModule` knows what device it is on. You can access the reference via `self.device`.

 Sometimes it is necessary to store tensors as module attributes. However, if they are not parameters they will
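As context for the `type_as` change above, here is a minimal sketch of the device-agnostic pattern the corrected snippet documents; the module name and tensor shape are illustrative, not part of the PR:

    import torch
    import pytorch_lightning as pl

    class DeviceAgnosticModule(pl.LightningModule):
        """Hypothetical module illustrating the pattern from the docs diff."""

        def forward(self, x):
            # New tensors are created on the CPU by default...
            z = torch.Tensor(2, 3)
            # ...and type_as moves z to x's device AND casts it to x's dtype
            # in one call, so no explicit device= argument is needed.
            z = z.type_as(x)
            return z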
@@ -149,7 +149,7 @@ a comma separated list of GPU ids:

 .. testcode::
     :skipif: torch.cuda.device_count() < 2

-    # DEFAULT (int) specifies how many GPUs to use
+    # DEFAULT (int) specifies how many GPUs to use per node
     Trainer(gpus=k)

     # Above is equivalent to
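The hunk above edits the comment over `Trainer(gpus=k)`. As a hedged sketch of the equivalence the surrounding docs describe (assuming the `Trainer(gpus=...)` signature of Lightning releases from this era), the int, list, and string forms below all select two GPUs on each node:

    from pytorch_lightning import Trainer

    # int: use the first two GPUs on each node
    trainer = Trainer(gpus=2)

    # list of device ids: train on GPUs 0 and 1 of each node
    trainer = Trainer(gpus=[0, 1])

    # comma-separated string of ids: same selection as the list form
    trainer = Trainer(gpus="0,1")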
@@ -233,7 +233,7 @@ def __init__(
     num_nodes: number of GPU nodes for distributed training.

-    gpus: Which GPUs to train on.
+    gpus: Number of GPUs to train on (int), or which GPUs to train on (list or str), applied per node.

     auto_select_gpus:
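Because the reworded docstring stresses that `gpus` is applied per node, here is a short sketch of how it combines with `num_nodes`; the counts are illustrative:

    from pytorch_lightning import Trainer

    # gpus counts devices on EACH node, so with num_nodes=4 this
    # run trains on 4 nodes * 8 GPUs = 32 GPUs in total.
    trainer = Trainer(gpus=8, num_nodes=4)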