
Fix/ptl trainer handling #1371

Merged
merged 11 commits into master from fix/ptl_trainer_handling Dec 10, 2022

Conversation

@dennisbader (Collaborator) commented Nov 19, 2022

Fixes #1363

Summary (Edited)

Various improvements to TorchForecastingModel

  • Each time fit()/predict() is called, the model now creates a new trainer object (or uses the user-supplied trainer); see the usage sketch after this list.
    • This avoids unexpected behavior after loading a model, errors where the trainer is not attached to the model, ...
    • Parameters such as verbose in fit()/predict() now always get applied to the trainer.
  • The PyTorch module is no longer saved twice:
    • model.model and model.trainer are not pickled anymore; what gets pickled was adapted so that
      • only the "empty" base TorchForecastingModel is saved as a pickle
      • only the necessary LightningModule parameters (weights, hyperparameters, ...) are saved, instead of pickling the module
  • The model can now be mapped to another device upon loading, via map_location and model.to_cpu().
    • Since we don't have a GPU in the test environment, a unit test for saving on GPU and loading to CPU could not be added.
  • Edit: tested the following for all TorchForecastingModels:
    • train & save the model on GPU (Colab), load to CPU (Colab), then train & predict
    • train & save the model on GPU (Colab), load to CPU (local, different machine), then train & predict
      -> Loading to CPU and predicting gives identical results on Colab and locally.
      -> Loading to CPU and predicting gives results with a negligible difference compared to GPU (diff <= +/- 1e-16).
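
A minimal usage sketch of the new behavior, assuming the public Darts API; the model choice, file name, and the exact load()/map_location signature are illustrative assumptions based on this PR's description, not code taken from the PR:

```python
import pytorch_lightning as pl

from darts.datasets import AirPassengersDataset
from darts.models import NBEATSModel

series = AirPassengersDataset().load().astype("float32")
model = NBEATSModel(input_chunk_length=24, output_chunk_length=12, n_epochs=2)

# fit() now builds a fresh PL Trainer internally on every call, so
# per-call arguments such as `verbose` are always applied:
model.fit(series, verbose=True)

# ...or a user-supplied trainer is used as-is:
my_trainer = pl.Trainer(max_epochs=2, accelerator="cpu")
model.fit(series, trainer=my_trainer)

# save() no longer pickles model.model / model.trainer: only the "empty"
# TorchForecastingModel is pickled, and the LightningModule's weights and
# hyperparameters are stored separately as a checkpoint.
model.save("nbeats_model.pt")

# Loading can remap the model to another device, e.g. from GPU to CPU
# (exact keyword placement is an assumption):
loaded = NBEATSModel.load("nbeats_model.pt", map_location="cpu")
loaded.to_cpu()
pred = loaded.predict(n=12)
```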

Additional Info

@madtoinou, regarding our discussions: we cannot avoid storing the Trainer object as an attribute of TorchForecastingModel, as otherwise the reference gets lost (the LightningModule only points to the trainer via a property). A simplified sketch of this pattern follows below.
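
A simplified sketch of that constraint; this is illustrative pseudocode for the pattern, not the actual Darts implementation:

```python
import pytorch_lightning as pl


class TorchForecastingModelSketch:
    """Hypothetical stand-in for Darts' TorchForecastingModel wrapper."""

    def __init__(self):
        self.model = None    # the LightningModule (PLForecastingModule)
        self.trainer = None  # the reference must live here: the
                             # LightningModule's `trainer` is only a
                             # property pointing at the attached trainer

    def fit(self, trainer=None, verbose=False):
        # a new trainer on every fit()/predict() call, unless the user
        # supplies one; per-call flags like `verbose` are always applied
        self.trainer = trainer if trainer is not None else pl.Trainer(
            enable_progress_bar=verbose
        )
        # training would then run via: self.trainer.fit(self.model, ...)
```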

@codecov-commenter commented Nov 19, 2022

Codecov Report

Base: 93.58% // Head: 93.67% // Increases project coverage by +0.08% 🎉

Coverage data is based on head (7cbbcbd) compared to base (d6a74c0).
Patch coverage: 100.00% of modified lines in pull request are covered.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #1371      +/-   ##
==========================================
+ Coverage   93.58%   93.67%   +0.08%     
==========================================
  Files          94       94              
  Lines        9390     9385       -5     
==========================================
+ Hits         8788     8791       +3     
+ Misses        602      594       -8     
Impacted Files Coverage Δ
darts/models/forecasting/tft_model.py 97.54% <100.00%> (+0.01%) ⬆️
...arts/models/forecasting/torch_forecasting_model.py 89.50% <100.00%> (+1.77%) ⬆️
darts/timeseries.py 91.71% <0.00%> (-0.06%) ⬇️
darts/models/forecasting/block_rnn_model.py 98.24% <0.00%> (-0.04%) ⬇️
darts/models/forecasting/nhits.py 99.27% <0.00%> (-0.01%) ⬇️
darts/datasets/__init__.py 100.00% <0.00%> (ø)

☔ View full report at Codecov.

@hrzn (Contributor) left a comment

Looks great! I only added very minor comments (and one question regarding the need to redefine dunder methods in ForecastingModel).

@dennisbader dennisbader merged commit cdf2b44 into master Dec 10, 2022
darts automation moved this from In review to Done Dec 10, 2022
@dennisbader dennisbader deleted the fix/ptl_trainer_handling branch December 10, 2022 13:48
Labels
None yet
Projects
darts
Done
Development

Successfully merging this pull request may close these issues.

Try avoid saving PTL trainer (and redefine it in fit() and predict() calls)
4 participants