
Commit 9f6be96

DavianYang authored on Aug 18, 2020 (with awaelchli, teddykoker, and mergify[bot])
Fix typo in Quick Start/Step-by-step walk-through (Lightning-AI#3007)
* Fix typo in Quick Start/Step-by-step walk-through
* Fix typo in Quick Start/Step-by-step walk-through
* Fix snippets in lightning module
* Remove testblock (doctest does not have torch with CUDA, so x.cuda() will fail)
* Remove test code ("..." is not python, so doctests fail)
* Fix Lightning-AI#3005
* Fix indentation, stage in docs

Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Teddy Koker <teddy.koker@gmail.com>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
1 parent: 6939f6f

File tree

3 files changed: +12 -10 lines


docs/source/introduction_guide.rst (+3 -2)
@@ -1,6 +1,7 @@
 .. testsetup:: *
 
     from pytorch_lightning.core.lightning import LightningModule
+    from pytorch_lightning.core.datamodule import LightningDataModule
     from pytorch_lightning.trainer.trainer import Trainer
 
 .. _introduction-guide:
@@ -259,9 +260,9 @@ In this case, it's better to group the full definition of a dataset into a `DataModule`
 - Val dataloader(s)
 - Test dataloader(s)
 
-.. code-block:: python
+.. testcode:: python
 
-    class MyDataModule(pl.DataModule):
+    class MyDataModule(LightningDataModule):
 
         def __init__(self):
             super().__init__()
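
Note: the switch from `.. code-block::` to `.. testcode::` above means Sphinx's doctest builder now executes the snippet, which is why the nonexistent name `pl.DataModule` had to become `LightningDataModule`. A minimal sketch of the names involved, assuming the 0.9-era module layout this commit targets (later releases may move things):

    # Hypothetical standalone check, not part of the commit itself.
    import pytorch_lightning as pl
    from pytorch_lightning.core.datamodule import LightningDataModule

    # pl.DataModule does not exist; the canonical class is LightningDataModule,
    # which is also re-exported at the package root.
    assert not hasattr(pl, "DataModule")
    assert pl.LightningDataModule is LightningDataModule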

docs/source/lightning-module.rst (+1 -1)
@@ -51,7 +51,7 @@ Notice a few things.
 
         # or to init a new tensor
         new_x = torch.Tensor(2, 3)
-        new_x = new_x.type_as(x.type())
+        new_x = new_x.type_as(x)
 
 5. There are no samplers for distributed, Lightning also does this for you.
 
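
Why the one-line fix above matters: `Tensor.type_as` expects a tensor argument, while `x.type()` returns a type string such as 'torch.FloatTensor', so the old snippet would raise at runtime. A minimal standalone sketch (plain PyTorch, not from the docs):

    import torch

    x = torch.randn(2, 3)  # stands in for a batch already on the right device/dtype

    # Old, broken form: x.type() is a string, and type_as() needs a tensor.
    # new_x = torch.Tensor(2, 3).type_as(x.type())  # raises TypeError

    # Fixed form: casts new_x to x's dtype and device without hard-coding
    # .cuda(), which keeps the snippet runnable on CPU-only doctest machines.
    new_x = torch.Tensor(2, 3).type_as(x)
    assert new_x.dtype == x.dtype and new_x.device == x.device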

docs/source/new-project.rst (+8 -7)
@@ -1,6 +1,7 @@
 .. testsetup:: *
 
     from pytorch_lightning.core.lightning import LightningModule
+    from pytorch_lightning.core.datamodule import LightningDataModule
     from pytorch_lightning.trainer.trainer import Trainer
     import os
     import torch
@@ -357,9 +358,9 @@ And the matching code:
 
 |
 
-.. code-block::
+.. testcode:: python
 
-    class MNISTDataModule(pl.LightningDataModule):
+    class MNISTDataModule(LightningDataModule):
 
         def __init__(self, batch_size=32):
             super().__init__()
@@ -407,20 +408,20 @@ over download/prepare/splitting data
 
 .. code-block:: python
 
-    class MyDataModule(pl.DataModule):
+    class MyDataModule(LightningDataModule):
 
         def prepare_data(self):
             # called only on 1 GPU
             download()
             tokenize()
             etc()
 
-        def setup(self):
+        def setup(self, stage=None):
             # called on every GPU (assigning state is OK)
             self.train = ...
             self.val = ...
 
-    def train_dataloader(self):
+        def train_dataloader(self):
             # do more...
             return self.train
 
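
For readers following the hunk above, here is a hedged, self-contained sketch of the prepare_data / setup(stage) / train_dataloader contract it describes; the dataset, split sizes, and class name are illustrative placeholders, not from the docs:

    import torch
    from torch.utils.data import DataLoader, TensorDataset, random_split
    from pytorch_lightning.core.datamodule import LightningDataModule

    class RandomDataModule(LightningDataModule):
        """Hypothetical example, not part of the Lightning docs."""

        def prepare_data(self):
            # Called on exactly one process: download/tokenize/write to disk.
            # Do not assign state here (self.x = ...); other processes
            # will not see it.
            pass

        def setup(self, stage=None):
            # Called on every GPU/process, so assigning state is OK. The
            # Trainer passes stage='fit' or stage='test'; the None default
            # added by this commit keeps manual dm.setup() calls working.
            full = TensorDataset(torch.randn(100, 3), torch.randint(0, 2, (100,)))
            self.train, self.val = random_split(full, [80, 20])

        def train_dataloader(self):
            return DataLoader(self.train, batch_size=32)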
@@ -432,7 +433,7 @@ First, define the information that you might need.
 
 .. code-block:: python
 
-    class MyDataModule(pl.DataModule):
+    class MyDataModule(LightningDataModule):
 
         def __init__(self):
             super().__init__()
@@ -444,7 +445,7 @@ First, define the information that you might need.
             tokenize()
             build_vocab()
 
-        def setup(self):
+        def setup(self, stage=None):
             vocab = load_vocab
             self.vocab_size = len(vocab)
 
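
A hedged usage note on the new `stage` argument: the Trainer supplies it when invoking the hook, while the None default keeps by-hand calls unchanged.

    dm = MyDataModule()
    dm.prepare_data()
    dm.setup()  # manual call: stage defaults to None, so everything is set up
    # trainer.fit(model, datamodule=dm)  # Trainer calls dm.setup(stage='fit')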
