Commit

Update learning rate on each backward pass instead of each forward pass. (#1477)

* change lr scheduler step interval to update every backwards pass instead of every forwards pass

* update CHANGELOG

* fix spacing

* Add TODO to lr schedule update

* remove trailing whitespace

Co-authored-by: William Falcon <waf2107@columbia.edu>
rmrao and williamFalcon committed Apr 20, 2020
1 parent 4fca994 commit 0203938
Showing 2 changed files with 10 additions and 7 deletions.
10 changes: 5 additions & 5 deletions CHANGELOG.md
@@ -24,20 +24,21 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

- Changed the default behaviour to no longer include a NaN check with each training iteration. ([#1475](https://github.com/PyTorchLightning/pytorch-lightning/pull/1475))

- Changed lr schedule step interval behavior to update every backwards pass instead of every forwards pass ([#1476](https://github.com/PyTorchLightning/pytorch-lightning/issues/1476))

- Updated semantic segmentation example with custom u-net and logging ([#1371](https://github.com/PyTorchLightning/pytorch-lightning/pull/1371))

-

### Deprecated

-
-


### Removed

-
-

-
-

### Fixed

@@ -52,7 +53,6 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

- Fixed a bug that caused the `callbacks` Trainer argument to reference a global variable ([#1534](https://github.com/PyTorchLightning/pytorch-lightning/pull/1534)).


## [0.7.3] - 2020-04-09

### Added
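The CHANGELOG entry above applies to schedulers configured with a per-step interval. As a rough illustration of the user-facing side (a hedged sketch of assumed usage, not taken from this commit; `LitModel` and its layer are placeholder names), a LightningModule might request per-step scheduling like this:

```python
import torch
from torch import nn
import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(10, 1)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return {'loss': nn.functional.mse_loss(self(x), y)}

    def configure_optimizers(self):
        optimizer = torch.optim.SGD(self.parameters(), lr=0.1)
        scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)
        # 'interval': 'step' asks the trainer to advance the scheduler per
        # optimizer step; with this commit that means once per accumulated
        # backward pass rather than once per forward pass.
        return [optimizer], [{'scheduler': scheduler, 'interval': 'step'}]
```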
7 changes: 5 additions & 2 deletions pytorch_lightning/trainer/training_loop.py
@@ -454,8 +454,11 @@ def run_training_epoch(self):
# when returning -1 from train_step, we end epoch early
early_stop_epoch = batch_result == -1

# update lr
self.update_learning_rates(interval='step')
# TODO: consolidate all actions that need to take place only after
# self.accumulate_grad_batches steps (optimizer step, lr update, global step increment)
if (self.batch_idx + 1) % self.accumulate_grad_batches == 0:
# update lr
self.update_learning_rates(interval='step')

# ---------------
# RUN VAL STEP
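For context, the training_loop.py change above gates the per-step learning-rate update on the gradient-accumulation boundary, so the scheduler only advances when the optimizer actually steps. Below is a minimal plain-PyTorch sketch of that same pattern (not the Lightning internals); `model`, `train_loader`, and `accumulate_grad_batches` are illustrative placeholders:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)
loss_fn = nn.MSELoss()

train_loader = DataLoader(TensorDataset(torch.randn(64, 10), torch.randn(64, 1)),
                          batch_size=8)
accumulate_grad_batches = 4  # optimizer (and scheduler) step every 4 batches

for batch_idx, (x, y) in enumerate(train_loader):
    # scale the loss so accumulated gradients match a single large batch
    loss = loss_fn(model(x), y) / accumulate_grad_batches
    loss.backward()  # gradients accumulate across batches

    if (batch_idx + 1) % accumulate_grad_batches == 0:
        optimizer.step()
        optimizer.zero_grad()
        # step the per-step scheduler only when the optimizer steps, mirroring
        # the accumulation check added in run_training_epoch above
        scheduler.step()
```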
