Fix broken trainer flags nb #4159

Merged
merged 6 commits on Oct 15, 2020
9 changes: 6 additions & 3 deletions notebooks/05-trainer-flags-overview.ipynb
@@ -74,7 +74,7 @@
"from torchvision.datasets.mnist import MNIST\n",
"from torchvision import transforms"
],
"execution_count": 2,
"execution_count": null,
"outputs": []
},
{
@@ -1644,7 +1644,9 @@
"\n",
"2. Iteratively until convergence or maximum number of tries max_trials (default 25) has been reached:\n",
"* Call fit() method of trainer. This evaluates steps_per_trial (default 3) number of training steps. Each training step can trigger an OOM error if the tensors (training batch, weights, gradients ect.) allocated during the steps have a too large memory footprint.\n",
"* If an OOM error is encountered, decrease the batch size, or else -> increase it. How much the batch size is increased/decreased is determined by the chosen strategy.\n",
" * If an OOM error is encountered, decrease the batch size\n",
" * Else increase it.\n",
"* How much the batch size is increased/decreased is determined by the chosen stratrgy.\n",
"\n",
"3. The found batch size is saved to model.hparams.batch_size\n",
"\n",
@@ -2152,6 +2154,7 @@
"By default Lightning will save a checkpoint in the working directory, which will be updated every epoch.\n",
"\n",
"### Automatic saving\n",
"By default Lightning will save a checkpoint in the end of the first epoch in the working directory, which will be updated every epoch."
]
},
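A minimal sketch of that default behaviour, again assuming the hypothetical `LitModel`; the checkpoint path is illustrative, since the exact layout depends on the logger and version directories:

```python
import pytorch_lightning as pl

# With no checkpointing configuration at all, Lightning writes a checkpoint
# under the working directory (default_root_dir) and updates it every epoch.
trainer = pl.Trainer(default_root_dir="checkpoints_demo", max_epochs=5)
trainer.fit(LitModel())  # LitModel: hypothetical LightningModule

# Later, restore weights and hyperparameters from the saved .ckpt file
# (path is illustrative; look under checkpoints_demo/ for the real one).
model = LitModel.load_from_checkpoint("checkpoints_demo/example_run/last_epoch.ckpt")
```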
{
@@ -2570,7 +2573,7 @@
"Lightning has built in integration with various loggers such as TensorBoard, wandb, commet, etc.\n",
"\n",
"\n",
"You can pass any metrics you want to logn during training, like loss, to TrainResult.log, such as loss or image output. Similarly, pass in to EvalReuslt.log anything you want to log during validation step.\n",
"You can pass any metrics you want to log during training to `self.log`, such as loss or accuracy. Similarly, pass in to self.log any metric you want to log during validation step.\n",
"\n",
"These values will be passed in to the logger of your choise. simply pass in any supported logger to logger trainer flag.\n",
"\n",
2 changes: 1 addition & 1 deletion notebooks/README.md
@@ -10,4 +10,4 @@ You can easily run any of the official notebooks by clicking the 'Open in Colab'
| __Datamodules__ | Learn about DataModules and train a dataset-agnostic model on MNIST and CIFAR10.| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/PytorchLightning/pytorch-lightning/blob/master/notebooks/02-datamodules.ipynb)|
| __GAN__ | Train a GAN on the MNIST Dataset. Learn how to use multiple optimizers in Lightning. | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/PytorchLightning/pytorch-lightning/blob/master/notebooks/03-basic-gan.ipynb) |
| __BERT__ | Fine-tune HuggingFace Transformers models on the GLUE Benchmark | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/PytorchLightning/pytorch-lightning/blob/master/notebooks/04-transformers-text-classification.ipynb) |
-| __Trainer Flags__ | Overview of the available Lightning `Trainer` flags | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/PytorchLightning/pytorch-lightning/blob/master/notebooks/05-trainer-flags-overview.ipynb) |
+| __Trainer Flags__ | Overview of the available Lightning `Trainer` flags | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/PytorchLightning/pytorch-lightning/blob/master/notebooks/05-trainer-flags-overview.ipynb) |