
Commit bc67689

williamFalcon and Borda authored
clean v2 docs (Lightning-AI#691)
* updated gitignore
* Update README.md
* updated gitignore
* updated links in ninja file
* updated docs
* Update README.md
* Update README.md
* finished callbacks
* finished callbacks
* finished callbacks
* fixed left menu
* added callbacks to menu
* added direct links to docs
* added direct links to docs
* added direct links to docs
* added direct links to docs
* added direct links to docs
* fixing TensorBoard (Lightning-AI#687)
* flake8
* fix typo
* fix tensorboardlogger drop test_tube dependence
* formatting
* fix tensorboard & tests
* upgrade Tensorboard
* test formatting separately
* try to fix JIT issue
* add tests for 1.4
* added direct links to docs
* updated gitignore
* updated links in ninja file
* updated docs
* finished callbacks
* finished callbacks
* finished callbacks
* fixed left menu
* added callbacks to menu
* added direct links to docs
* added direct links to docs
* added direct links to docs
* added direct links to docs
* added direct links to docs
* added direct links to docs
* finished rebase
* making private members
* making private members
* making private members
* working on trainer docs
* working on trainer docs
* working on trainer docs
* working on trainer docs
* working on trainer docs
* working on trainer docs
* set auto dp if no backend
* working on trainer docs
* working on trainer docs
* working on trainer docs
* working on trainer docs
* working on trainer docs
* working on trainer docs
* working on trainer docs
* working on trainer docs
* fixed lightning import
* cleared spaces
* cleared spaces
* cleared spaces
* cleared spaces
* cleared spaces
* cleared spaces
* cleared spaces
* cleared spaces
* cleared spaces
* cleared spaces
* finished lightning module
* finished lightning module
* finished lightning module
* finished lightning module
* added callbacks
* added loggers
* added loggers
* added loggers
* added loggers
* added loggers
* added loggers
* added loggers
* added loggers
* set auto dp if no backend
* added loggers
* added loggers
* added loggers
* added loggers
* added loggers
* added loggers
* flake 8
* flake 8

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
1 parent bde549c commit bc67689

File tree

22 files changed: +1158 -657 lines changed


.gitignore

Lines changed: 2 additions & 0 deletions
@@ -14,6 +14,8 @@ tests/save_dir
 default/
 lightning_logs/
 tests/tests/
+*.rst
+/docs/source/*.md
 
 # Byte-compiled / optimized / DLL files
 __pycache__/
Lines changed: 12 additions & 12 deletions
@@ -1,17 +1,17 @@
 {%- set external_urls = {
-    'github': 'https://github.com/williamFalcon/pytorch-lightning',
-    'github_issues': 'https://github.com/williamFalcon/pytorch-lightning/issues',
-    'contributing': 'https://github.com/williamFalcon/pytorch-lightning/blob/master/CONTRIBUTING.md',
-    'docs': 'https://williamfalcon.github.io/pytorch-lightning',
+    'github': 'https://github.com/PytorchLightning/pytorch-lightning',
+    'github_issues': 'https://github.com/PytorchLightning/pytorch-lightning/issues',
+    'contributing': 'https://github.com/PytorchLightning/pytorch-lightning/blob/master/CONTRIBUTING.md',
+    'docs': 'https://pytorchlightning.github.io/pytorch-lightning',
     'twitter': 'https://twitter.com/PyTorchLightnin',
     'discuss': 'https://discuss.pytorch.org',
-    'tutorials': 'https://williamfalcon.github.io/pytorch-lightning/',
-    'previous_pytorch_versions': 'https://williamfalcon.github.io/pytorch-lightning/',
-    'home': 'https://williamfalcon.github.io/pytorch-lightning/',
-    'get_started': 'https://williamfalcon.github.io/pytorch-lightning/',
-    'features': 'https://williamfalcon.github.io/pytorch-lightning/',
-    'blog': 'https://williamfalcon.github.io/pytorch-lightning/',
-    'resources': 'https://williamfalcon.github.io/pytorch-lightning/',
-    'support': 'https://williamfalcon.github.io/pytorch-lightning/',
+    'tutorials': 'https://pytorchlightning.github.io/pytorch-lightning/',
+    'previous_pytorch_versions': 'https://pytorchlightning.github.io/pytorch-lightning/',
+    'home': 'https://pytorchlightning.github.io/pytorch-lightning/',
+    'get_started': 'https://pytorchlightning.github.io/pytorch-lightning/',
+    'features': 'https://pytorchlightning.github.io/pytorch-lightning/',
+    'blog': 'https://pytorchlightning.github.io/pytorch-lightning/',
+    'resources': 'https://pytorchlightning.github.io/pytorch-lightning/',
+    'support': 'https://pytorchlightning.github.io/pytorch-lightning/',
 }
 -%}

docs/source/conf.py

Lines changed: 1 addition & 0 deletions
@@ -83,6 +83,7 @@
     'sphinx.ext.autosummary',
     'sphinx.ext.napoleon',
     'recommonmark',
+    'sphinx.ext.autosectionlabel',
     # 'm2r',
     'nbsphinx',
 ]
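A note on this hunk: sphinx.ext.autosectionlabel is a built-in Sphinx extension that generates a :ref: target for every section title, which is what lets the reorganized pages cross-link sections without hand-written labels. A minimal sketch of the resulting conf.py fragment; the autosectionlabel_prefix_document line is a standard Sphinx option shown for illustration, not something this commit sets:

# docs/source/conf.py (sketch of the extension list after this hunk)
extensions = [
    'sphinx.ext.autosummary',
    'sphinx.ext.napoleon',
    'recommonmark',
    'sphinx.ext.autosectionlabel',
    'nbsphinx',
]

# Illustrative only (not in this commit): prefix generated labels with
# the document path so identically titled sections do not collide.
autosectionlabel_prefix_document = True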

docs/source/examples.rst

Lines changed: 29 additions & 3 deletions
@@ -1,8 +1,34 @@
-Examples & Tutorials
-====================
+GAN
+====
+.. toctree::
+   :maxdepth: 3
+
+   pl_examples.domain_templates.gan
+
+MNIST
+====
+.. toctree::
+   :maxdepth: 3
+
+   pl_examples.basic_examples.lightning_module_template
+
+Multi-node (ddp) MNIST
+====
+.. toctree::
+   :maxdepth: 3
+
+   pl_examples.multi_node_examples.multi_node_ddp_demo
+
+Multi-node (ddp2) MNIST
+====
+.. toctree::
+   :maxdepth: 3
 
+   pl_examples.multi_node_examples.multi_node_ddp2_demo
 
+Imagenet
+====
 .. toctree::
    :maxdepth: 3
 
-   pl_examples
+   pl_examples.full_examples.imagenet.imagenet_example

docs/source/index.rst

Lines changed: 31 additions & 6 deletions
@@ -3,23 +3,47 @@
    You can adapt this file completely to your liking, but it should at least
    contain the root `toctree` directive.
 
-Welcome to PyTorch-Lightning!
+PyTorch-Lightning Documentation
 =============================
 
 .. toctree::
-   :maxdepth: 4
+   :maxdepth: 1
    :name: start
-   :caption: Quick Start
+   :caption: Start Here
 
    new-project
-   examples
 
 .. toctree::
    :maxdepth: 4
    :name: docs
-   :caption: Docs
+   :caption: Python API
+
+   callbacks
+   lightning-module
+   logging
+   trainer
+
+.. toctree::
+   :maxdepth: 1
+   :name: Examples
+   :caption: Examples
+
+   examples
+
+.. toctree::
+   :maxdepth: 1
+   :name: Tutorials
+   :caption: Tutorials
+
+   tutorials
+
+.. toctree::
+   :maxdepth: 1
+   :name: Common Use Cases
+   :caption: Common Use Cases
+
+   common-cases
 
-   documentation
 
 .. toctree::
    :maxdepth: 1
@@ -29,6 +53,7 @@ Welcome to PyTorch-Lightning!
    CODE_OF_CONDUCT.md
    CONTRIBUTING.md
    BECOMING_A_CORE_CONTRIBUTOR.md
+   governance.md
 
 
 Indices and tables

docs/source/new-project.rst

Lines changed: 8 additions & 7 deletions
@@ -1,13 +1,13 @@
 Quick Start
 ===========
-To start a new project define two files, a LightningModule and a Trainer file.
-To illustrate Lightning power and simplicity, here's an example of a typical research flow.
+| To start a new project define two files, a LightningModule and a Trainer file.
+| To illustrate the power of Lightning and its simplicity, here's an example of a typical research flow.
 
 Case 1: BERT
 ------------
 
-Let's say you're working on something like BERT but want to try different ways of training or even different networks.
-You would define a single LightningModule and use flags to switch between your different ideas.
+| Let's say you're working on something like BERT but want to try different ways of training or even different networks.
+| You would define a single LightningModule and use flags to switch between your different ideas.
 
 .. code-block:: python
 
@@ -66,6 +66,7 @@ Then you could do rapid research by switching between these two and using the sa
 
 **Notice a few things about this flow:**
 
-1. You're writing pure PyTorch... no unnecessary abstractions or new libraries to learn.
-2. You get free GPU and 16-bit support without writing any of that code in your model.
-3. You also get all of the capabilities below (without coding or testing yourself).
+1. You're writing pure PyTorch... no unnecessary abstractions or new libraries to learn.
+2. You get free GPU and 16-bit support without writing any of that code in your model.
+3. You also get early stopping, multi-gpu training, 16-bit and MUCH more without coding anything!
+

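To make the flow above concrete, here is a minimal, self-contained sketch of the pattern the page describes: one LightningModule plus a flag to switch between research ideas, trained with a plain Trainer. The names (CoolSystem, use_alternative_head) and the toy data are illustrative, not from the commit, and hook details varied slightly across Lightning releases of this vintage:

import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class CoolSystem(pl.LightningModule):
    """Hypothetical module: one class, a flag to switch between ideas."""

    def __init__(self, use_alternative_head=False):
        super(CoolSystem, self).__init__()
        self.use_alternative_head = use_alternative_head
        self.head_a = torch.nn.Linear(28 * 28, 10)
        self.head_b = torch.nn.Linear(28 * 28, 10)

    def forward(self, x):
        x = x.view(x.size(0), -1)
        # the flag switches between two research ideas
        head = self.head_b if self.use_alternative_head else self.head_a
        return head(x)

    def training_step(self, batch, batch_nb):
        x, y = batch
        return {'loss': F.cross_entropy(self.forward(x), y)}

    def configure_optimizers(self):
        return [torch.optim.Adam(self.parameters(), lr=0.02)]

    def train_dataloader(self):
        # toy data so the sketch runs end to end
        x = torch.randn(256, 28 * 28)
        y = torch.randint(0, 10, (256,))
        return DataLoader(TensorDataset(x, y), batch_size=32)


# GPU / 16-bit support come from Trainer flags, not from model code;
# default settings may train for many epochs on this toy data.
trainer = pl.Trainer()
trainer.fit(CoolSystem(use_alternative_head=True))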
pytorch_lightning/callbacks/pt_callbacks.py

Lines changed: 80 additions & 47 deletions
@@ -1,3 +1,9 @@
+"""
+Callbacks
+====================================
+Callbacks supported by Lightning
+"""
+
 import os
 import shutil
 import logging
@@ -8,26 +14,7 @@
 
 
 class Callback(object):
-    """Abstract base class used to build new callbacks.
-
-    # Properties
-    * params: dict. Training parameters
-        (eg. verbosity, batch size, number of epochs...).
-        Reference of the model being trained.
-
-    The `logs` dictionary that callback methods take as argument will contain keys
-    for quantities relevant to the current batch or epoch.
-    Currently, the `.fit()` method of the `Sequential` model class will include the following
-    quantities in the `logs` that it passes to its callbacks:
-    * on_epoch_end: logs include `acc` and `loss`, and
-        optionally include `val_loss`
-        (if validation is enabled in `fit`), and `val_acc`
-        (if validation and accuracy monitoring are enabled).
-    * on_batch_begin: logs include `size`,
-        the number of samples in the current batch.
-    * on_batch_end: logs include `loss`, and optionally `acc`
-        (if accuracy monitoring is enabled).
-
+    r"""Abstract base class used to build new callbacks.
     """
 
     def __init__(self):
@@ -43,12 +30,30 @@ def set_model(self, model):
         self.model = model
 
     def on_epoch_begin(self, epoch, logs=None):
+        """
+        called when the epoch begins
+
+        Args:
+            epoch (int): current epoch
+            logs (dict): key-value pairs of quantities to monitor
+
+        Example:
+
+            on_epoch_begin(epoch=2, logs={'val_loss': 0.2})
+        """
         pass
 
     def on_epoch_end(self, epoch, logs=None):
         pass
 
     def on_batch_begin(self, batch, logs=None):
+        """
+        called when the batch starts.
+
+        Args:
+            batch (Tensor): current batch tensor
+            logs (dict): key-value pairs of quantities to monitor
+        """
         pass
 
     def on_batch_end(self, batch, logs=None):
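Given the hooks documented in this hunk, subclassing is straightforward. A small illustrative sketch (the class name and print logic are mine, not part of the commit); it imports Callback from the module this diff touches:

from pytorch_lightning.callbacks.pt_callbacks import Callback


class PrintingCallback(Callback):
    # hypothetical subclass: report the monitored quantities per epoch
    def on_epoch_begin(self, epoch, logs=None):
        print('epoch %d starting' % epoch)

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        print('epoch %d done, val_loss=%s' % (epoch, logs.get('val_loss')))


cb = PrintingCallback()
cb.on_epoch_begin(epoch=2, logs={'val_loss': 0.2})
cb.on_epoch_end(epoch=2, logs={'val_loss': 0.18})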
@@ -62,25 +67,33 @@ def on_train_end(self, logs=None):
 
 
 class EarlyStopping(Callback):
-    """Stop training when a monitored quantity has stopped improving.
+    r"""
+    Stop training when a monitored quantity has stopped improving.
 
-    # Arguments
-        monitor: quantity to be monitored.
-        min_delta: minimum change in the monitored quantity
+    Args:
+        monitor (str): quantity to be monitored.
+        min_delta (float): minimum change in the monitored quantity
             to qualify as an improvement, i.e. an absolute
             change of less than min_delta, will count as no
             improvement.
-        patience: number of epochs with no improvement
+        patience (int): number of epochs with no improvement
             after which training will be stopped.
-        verbose: verbosity mode.
-        mode: one of {auto, min, max}. In `min` mode,
+        verbose (bool): verbosity mode.
+        mode (str): one of {auto, min, max}. In `min` mode,
             training will stop when the quantity
             monitored has stopped decreasing; in `max`
             mode it will stop when the quantity
             monitored has stopped increasing; in `auto`
             mode, the direction is automatically inferred
             from the name of the monitored quantity.
 
+    Example::
+
+        from pytorch_lightning import Trainer
+        from pytorch_lightning.callbacks import EarlyStopping
+
+        early_stopping = EarlyStopping('val_loss')
+        Trainer(early_stop_callback=early_stopping)
     """
 
     def __init__(self, monitor='val_loss',
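A slightly fuller configuration than the docstring's own example, exercising each documented argument; the values are illustrative, not defaults from the source:

from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import EarlyStopping

# stop once val_loss fails to improve by at least 0.01
# for 3 consecutive epochs
early_stopping = EarlyStopping(
    monitor='val_loss',
    min_delta=0.01,
    patience=3,
    verbose=True,
    mode='min',
)
trainer = Trainer(early_stop_callback=early_stopping)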
@@ -150,20 +163,22 @@ def on_train_end(self, logs=None):
 
 
 class ModelCheckpoint(Callback):
-    """Save the model after every epoch.
-
-    The `filepath` can contain named formatting options,
-    which will be filled the value of `epoch` and
-    keys in `logs` (passed in `on_epoch_end`).
-    For example: if `filepath` is `weights.{epoch:02d}-{val_loss:.2f}.hdf5`,
-    then the model checkpoints will be saved with the epoch number and
-    the validation loss in the filename.
-
-    # Arguments
-        filepath: string, path to save the model file.
-        monitor: quantity to monitor.
-        verbose: verbosity mode, 0 or 1.
-        save_top_k: if `save_top_k == k`,
+    r"""
+
+    Save the model after every epoch.
+
+    Args:
+        filepath (str): path to save the model file.
+            Can contain named formatting options to be auto-filled.
+
+            Example::
+
+                # save epoch and val_loss in name
+                ModelCheckpoint(filepath='{epoch:02d}-{val_loss:.2f}.hdf5')
+                # saves file like: /path/epoch_2-val_loss_0.2.hdf5
+        monitor (str): quantity to monitor.
+        verbose (bool): verbosity mode, 0 or 1.
+        save_top_k (int): if `save_top_k == k`,
             the best k models according to
             the quantity monitored will be saved.
             if `save_top_k == 0`, no models are saved.
@@ -172,19 +187,28 @@ class ModelCheckpoint(Callback):
             if `save_top_k >= 2` and the callback is called multiple
             times inside an epoch, the name of the saved file will be
             appended with a version count starting with `v0`.
-        mode: one of {auto, min, max}.
+        mode (str): one of {auto, min, max}.
             If `save_top_k != 0`, the decision
             to overwrite the current save file is made
             based on either the maximization or the
             minimization of the monitored quantity. For `val_acc`,
             this should be `max`, for `val_loss` this should
             be `min`, etc. In `auto` mode, the direction is
             automatically inferred from the name of the monitored quantity.
-        save_weights_only: if True, then only the model's weights will be
+        save_weights_only (bool): if True, then only the model's weights will be
             saved (`model.save_weights(filepath)`), else the full model
             is saved (`model.save(filepath)`).
-        period: Interval (number of epochs) between checkpoints.
+        period (int): Interval (number of epochs) between checkpoints.
+
+    Example::
 
+        from pytorch_lightning import Trainer
+        from pytorch_lightning.callbacks import ModelCheckpoint
+
+        checkpoint_callback = ModelCheckpoint(filepath='my_path')
+        Trainer(checkpoint_callback=checkpoint_callback)
+
+        # saves checkpoints to my_path whenever 'val_loss' has a new min
     """
 
     def __init__(self, filepath, monitor='val_loss', verbose=0,
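Pulling the documented arguments together, a sketch of a checkpoint callback that keeps only the best models; the path and values are illustrative:

from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

# keep the 3 lowest-val_loss checkpoints; epoch and val_loss
# are auto-filled into the filename as documented above
checkpoint_callback = ModelCheckpoint(
    filepath='my_path/{epoch:02d}-{val_loss:.2f}.hdf5',
    monitor='val_loss',
    verbose=1,
    save_top_k=3,
    mode='min',
)
trainer = Trainer(checkpoint_callback=checkpoint_callback)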
@@ -330,11 +354,20 @@ def on_epoch_end(self, epoch, logs=None):
 
 
 class GradientAccumulationScheduler(Callback):
-    """Change gradient accumulation factor according to scheduling.
+    r"""
+    Change gradient accumulation factor according to scheduling.
+
+    Args:
+        scheduling (dict): scheduling in format {epoch: accumulation_factor}
+
+    Example::
 
-    # Arguments
-        scheduling: dict, scheduling in format {epoch: accumulation_factor}
+        from pytorch_lightning import Trainer
+        from pytorch_lightning.callbacks import GradientAccumulationScheduler
 
+        # at epoch 5 start accumulating every 2 batches
+        accumulator = GradientAccumulationScheduler(scheduling={5: 2})
+        Trainer(accumulate_grad_batches=accumulator)
     """
 
     def __init__(self, scheduling: dict):
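Finally, the three callbacks in this file can be wired into a single Trainer. Each keyword below appears in one of the docstring examples above, though combining them like this is my sketch, not code from the commit:

from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import (
    EarlyStopping, GradientAccumulationScheduler, ModelCheckpoint)

# accumulate every 2 batches from epoch 5, every 4 from epoch 10
accumulator = GradientAccumulationScheduler(scheduling={5: 2, 10: 4})

trainer = Trainer(
    early_stop_callback=EarlyStopping(monitor='val_loss'),
    checkpoint_callback=ModelCheckpoint(filepath='my_path'),
    accumulate_grad_batches=accumulator,
)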
