remove deprecated API for v0.8 (#2073)
* remove deprecated API

* chlog

* times

* missed

* formatting check

* missing

* missing

* miss

* fix docs build error

* fix pep whitespace error

* docs

* wip

* amp_level

* amp_level

Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Borda and awaelchli committed Jun 12, 2020
1 parent 08573d0 commit 2674976
Showing 34 changed files with 94 additions and 533 deletions.
2 changes: 1 addition & 1 deletion .drone.yml
@@ -38,7 +38,7 @@ steps:
  #- pip install -r ./docs/requirements.txt --user -q
  - pip list
  - python -c "import torch ; print(' & '.join([torch.cuda.get_device_name(i) for i in range(torch.cuda.device_count())]) if torch.cuda.is_available() else 'only CPU')"
-  - coverage run --source pytorch_lightning -m py.test pytorch_lightning tests benchmarks -v # --flake8
+  - coverage run --source pytorch_lightning -m py.test pytorch_lightning tests benchmarks -v --durations=25 # --flake8
  #- cd docs; make doctest; make coverage
  - coverage report
  - codecov --token $CODECOV_TOKEN # --pr $DRONE_PULL_REQUEST --build $DRONE_BUILD_NUMBER --branch $DRONE_BRANCH --commit $DRONE_COMMIT --tag $DRONE_TAG
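A side note on the new flag (behavior per pytest's own documentation, not this diff): `--durations=25` makes pytest print the 25 slowest test phases at the end of a run, which helps spot slow tests in CI. A rough local equivalent, assuming the same repository layout:

    import pytest

    # Run the suite verbosely and report the 25 slowest
    # setup/call/teardown phases, mirroring the CI flag above.
    pytest.main(["pytorch_lightning", "tests", "benchmarks", "-v", "--durations=25"])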
GitHub Actions workflow (file path not shown in this view)
@@ -1,5 +1,13 @@
-name: "Check Formatting"
-on: [push, pull_request]
+name: "Check Formatting - Black"
+on:
+  # Trigger the workflow on push or pull request,
+  # but only for the master branch
+  push:
+    branches:
+      - master
+  pull_request:
+    branches:
+      - master

jobs:
  check_code_formatting:
8 changes: 7 additions & 1 deletion CHANGELOG.md
@@ -53,7 +53,12 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
### Removed

- Removed unintended Trainer argument `progress_bar_callback`, the callback should be passed in by `Trainer(callbacks=[...])` instead ([#1855](https://github.com/PyTorchLightning/pytorch-lightning/pull/1855))
-- Remove obsolete `self._device` in Trainer ([#1849](https://github.com/PyTorchLightning/pytorch-lightning/pull/1849))
+- Removed obsolete `self._device` in Trainer ([#1849](https://github.com/PyTorchLightning/pytorch-lightning/pull/1849))
+- Removed deprecated API ([#2073](https://github.com/PyTorchLightning/pytorch-lightning/pull/2073))
+    * Packages: `pytorch_lightning.pt_overrides`, `pytorch_lightning.root_module`
+    * Modules: `pytorch_lightning.logging.comet_logger`, `pytorch_lightning.logging.mlflow_logger`, `pytorch_lightning.logging.test_tube_logger`, `pytorch_lightning.overrides.override_data_parallel`, `pytorch_lightning.core.model_saving`, `pytorch_lightning.core.root_module`
+    * Trainer arguments: `add_row_log_interval`, `default_save_path`, `gradient_clip`, `nb_gpu_nodes`, `max_nb_epochs`, `min_nb_epochs`, `nb_sanity_val_steps`
+    * Trainer attributes: `nb_gpu_nodes`, `num_gpu_nodes`, `gradient_clip`, `max_nb_epochs`, `min_nb_epochs`, `nb_sanity_val_steps`, `default_save_path`, `tng_tqdm_dic`
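As a migration aid (editorial note, not part of the diff): the removed Trainer arguments map to the replacement names below, taken from the deprecation notices that this commit deletes from `pytorch_lightning/trainer/__init__.py` further down. The values shown are only the documented defaults or placeholders.

    from pytorch_lightning import Trainer

    # Each removed name (right) maps to its replacement (left).
    trainer = Trainer(
        gradient_clip_val=0.0,     # was: gradient_clip
        num_nodes=1,               # was: nb_gpu_nodes
        max_epochs=1000,           # was: max_nb_epochs
        min_epochs=1,              # was: min_nb_epochs
        num_sanity_val_steps=5,    # was: nb_sanity_val_steps
        row_log_interval=10,       # was: add_row_log_interval
        default_root_dir='.',      # was: default_save_path
    )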

### Fixed

@@ -102,6 +107,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
### Deprecated

- Deprecated `tags_csv` in favor of `hparams_file` ([#1271](https://github.com/PyTorchLightning/pytorch-lightning/pull/1271))
+- Deprecated `amp_level` in favor of native AMP ([#1561](https://github.com/PyTorchLightning/pytorch-lightning/pull/1561))

### Fixed

8 changes: 1 addition & 7 deletions docs/source/conf.py
@@ -141,13 +141,7 @@
'api/modules.rst',

# deprecated/renamed:
-'api/pytorch_lightning.loggers.comet_logger.rst', # TODO: remove in v0.8.0
-'api/pytorch_lightning.loggers.mlflow_logger.rst', # TODO: remove in v0.8.0
-'api/pytorch_lightning.loggers.test_tube_logger.rst', # TODO: remove in v0.8.0
-'api/pytorch_lightning.callbacks.pt_callbacks.*', # TODO: remove in v0.8.0
-'api/pytorch_lightning.pt_overrides.*', # TODO: remove in v0.8.0
-'api/pytorch_lightning.root_module.*', # TODO: remove in v0.8.0
-'api/pytorch_lightning.logging.*', # TODO: remove in v0.8.0
+'api/pytorch_lightning.logging.*', # TODO: remove in v0.9.0
]

# The name of the Pygments (syntax highlighting) style to use.
2 changes: 1 addition & 1 deletion docs/source/weights_loading.rst
@@ -31,7 +31,7 @@ To change the checkpoint path pass in:

.. testcode::

-    trainer = Trainer(default_save_path='/your/path/to/save/checkpoints')
+    trainer = Trainer(default_root_dir='/your/path/to/save/checkpoints')

To modify the behavior of checkpointing pass in your own callback.
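To make that sentence concrete, here is a minimal sketch of passing a custom checkpoint callback, assuming the `ModelCheckpoint` argument names of this release line:

    from pytorch_lightning import Trainer
    from pytorch_lightning.callbacks import ModelCheckpoint

    # Keep only the best checkpoint by validation loss under the given path.
    checkpoint_callback = ModelCheckpoint(
        filepath='/your/path/to/save/checkpoints',
        monitor='val_loss',
        save_top_k=1,
    )
    trainer = Trainer(checkpoint_callback=checkpoint_callback)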

2 changes: 0 additions & 2 deletions pytorch_lightning/core/hooks.py
@@ -149,8 +149,6 @@ def backward(self, use_amp, loss, optimizer):

        if self.trainer.use_native_amp:
            self.trainer.scaler.scale(loss).backward()
-
-        # TODO: remove in v0.8.0
        else:
            with amp.scale_loss(loss, optimizer) as scaled_loss:
                scaled_loss.backward()
4 changes: 1 addition & 3 deletions pytorch_lightning/core/lightning.py
@@ -990,9 +990,7 @@ def configure_apex(self, amp, model, optimizers, amp_level):
            return model, optimizers
        """
-        model, optimizers = amp.initialize(
-            model, optimizers, opt_level=amp_level,
-        )
+        model, optimizers = amp.initialize(model, optimizers, opt_level=amp_level)

        return model, optimizers
11 changes: 0 additions & 11 deletions pytorch_lightning/core/model_saving.py

This file was deleted.

11 changes: 0 additions & 11 deletions pytorch_lightning/core/root_module.py

This file was deleted.

11 changes: 0 additions & 11 deletions pytorch_lightning/logging/comet_logger.py

This file was deleted.

11 changes: 0 additions & 11 deletions pytorch_lightning/logging/mlflow_logger.py

This file was deleted.

11 changes: 0 additions & 11 deletions pytorch_lightning/logging/test_tube_logger.py

This file was deleted.

12 changes: 0 additions & 12 deletions pytorch_lightning/overrides/override_data_parallel.py

This file was deleted.

9 changes: 0 additions & 9 deletions pytorch_lightning/pt_overrides/__init__.py

This file was deleted.

12 changes: 0 additions & 12 deletions pytorch_lightning/pt_overrides/override_data_parallel.py

This file was deleted.

9 changes: 0 additions & 9 deletions pytorch_lightning/root_module/__init__.py

This file was deleted.

11 changes: 0 additions & 11 deletions pytorch_lightning/root_module/decorators.py

This file was deleted.

11 changes: 0 additions & 11 deletions pytorch_lightning/root_module/grads.py

This file was deleted.

11 changes: 0 additions & 11 deletions pytorch_lightning/root_module/hooks.py

This file was deleted.

11 changes: 0 additions & 11 deletions pytorch_lightning/root_module/memory.py

This file was deleted.

11 changes: 0 additions & 11 deletions pytorch_lightning/root_module/model_saving.py

This file was deleted.

11 changes: 0 additions & 11 deletions pytorch_lightning/root_module/root_module.py

This file was deleted.
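The deleted files above were deprecated alias modules. A hedged sketch of the import migration (new paths inferred from the surviving package layout, for example `pytorch_lightning.loggers`):

    # Old imports (modules deleted in this commit):
    #   from pytorch_lightning.logging.comet_logger import CometLogger
    #   from pytorch_lightning.root_module.root_module import LightningModule
    # New imports:
    from pytorch_lightning.loggers import CometLogger
    from pytorch_lightning import LightningModule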

37 changes: 0 additions & 37 deletions pytorch_lightning/trainer/__init__.py
@@ -428,13 +428,6 @@ def on_train_end(self, trainer, pl_module):
# default used by the Trainer
trainer = Trainer(gradient_clip_val=0.0)
-gradient_clip:
-    .. warning:: .. deprecated:: 0.5.0
-        Use `gradient_clip_val` instead. Will remove 0.8.0.
log_gpu_memory
^^^^^^^^^^^^^^
Options:
@@ -495,12 +488,6 @@ def on_train_end(self, trainer, pl_module):
# default used by the Trainer
trainer = Trainer(max_epochs=1000)
-max_nb_epochs:
-    .. warning:: .. deprecated:: 0.5.0
-        Use `max_epochs` instead. Will remove 0.8.0.
min_epochs
^^^^^^^^^^
Force training for at least these many epochs
@@ -510,11 +497,6 @@ def on_train_end(self, trainer, pl_module):
# default used by the Trainer
trainer = Trainer(min_epochs=1)
-min_nb_epochs:
-    .. warning:: deprecated:: 0.5.0
-        Use `min_epochs` instead. Will remove 0.8.0.
max_steps
^^^^^^^^^
Stop training after this number of steps
@@ -559,12 +541,6 @@ def on_train_end(self, trainer, pl_module):
# to train on 8 nodes
trainer = Trainer(num_nodes=8)
-nb_gpu_nodes:
-    .. warning:: .. deprecated:: 0.5.0
-        Use `num_nodes` instead. Will remove 0.8.0.
num_processes
^^^^^^^^^^^^^
@@ -595,12 +571,6 @@ def on_train_end(self, trainer, pl_module):
# turn it off
trainer = Trainer(num_sanity_val_steps=0)
-nb_sanity_val_steps:
-    .. warning:: .. deprecated:: 0.5.0
-        Use `num_sanity_val_steps` instead. Will remove 0.8.0.
num_tpu_cores
^^^^^^^^^^^^^
.. warning:: .. deprecated:: 0.7.6
@@ -825,13 +795,6 @@ def on_train_end(self, trainer, pl_module):
# default used by the Trainer
trainer = Trainer(row_log_interval=10)
-add_row_log_interval:
-    .. warning:: .. deprecated:: 0.5.0
-        Use `row_log_interval` instead. Will remove 0.8.0.
use_amp:
.. warning:: .. deprecated:: 0.7.0
9 changes: 3 additions & 6 deletions pytorch_lightning/trainer/auto_mix_precision.py
@@ -20,11 +20,9 @@ class TrainerAMPMixin(ABC):
    use_native_amp: bool

    def init_amp(self, use_amp):
-        # TODO: remove in v 0.8.0
        if self.use_native_amp:
-            rank_zero_warn("`amp_level` has been deprecated since v0.7.4 "
-                           "(native amp does not require it)"
-                           " and this argument will be removed in v0.8.0", DeprecationWarning)
+            rank_zero_warn("`amp_level` has been deprecated since v0.7.4 (native amp does not require it)"
+                           " and this argument will be removed in v0.9.0", DeprecationWarning)

        # Backward compatibility, TODO: remove in v0.9.0
        if use_amp is not None:
@@ -38,13 +36,12 @@ def init_amp(self, use_amp):
            log.info('Using 16bit precision.')
            return

-        # TODO: remove all below for v0.8.0
+        # TODO: remove all below for v0.9.0
        if use_amp and not APEX_AVAILABLE: # pragma: no-cover
            raise ModuleNotFoundError("""
            You set `use_amp=True` but do not have apex installed.
            Install apex first using this guide and rerun with use_amp=True:
            https://github.com/NVIDIA/apex#linux
-            this run will NOT use 16 bit precision
            """)
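Tied to the warning above, a minimal migration sketch (assuming a PyTorch build with native AMP support): request 16-bit precision directly and drop `amp_level` altogether.

    from pytorch_lightning import Trainer

    # Before (apex): trainer = Trainer(use_amp=True, amp_level='O2')
    # After (native AMP): amp_level is not needed.
    trainer = Trainer(precision=16)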