
Try avoid saving PTL trainer (and redefine it in fit() and predict() calls) #1363

Closed
hrzn opened this issue Nov 15, 2022 · 4 comments · Fixed by #1371

hrzn (Contributor) commented Nov 15, 2022

No description provided.

hrzn created this issue from a note in darts (To do) Nov 15, 2022
alexcolpitts96 (Contributor) commented

TorchForecastingModel._setup_trainer() could be adjusted to return a Trainer instance rather than saving the trainer to self.

In my experience, training and inference often call for different trainer args. One option is to save the trainer args at model initialization and build a trainer from them for each fit() and predict() call, with the option to override them when calling fit() or predict() (see the sketch below).
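
A minimal sketch of that idea, assuming hypothetical names (`TorchForecastingModelSketch`, `_base_trainer_kwargs`, `_build_trainer`) rather than Darts' actual internals:

```python
import pytorch_lightning as pl


class TorchForecastingModelSketch:
    """Illustration only: store trainer kwargs at init, build a fresh Trainer per call."""

    def __init__(self, **trainer_kwargs):
        # Keep the kwargs passed at model creation as the defaults.
        self._base_trainer_kwargs = trainer_kwargs

    def _build_trainer(self, overrides=None) -> pl.Trainer:
        # Merge per-call overrides (e.g. a different accelerator or max_epochs
        # for inference) on top of the defaults and return a new Trainer.
        kwargs = {**self._base_trainer_kwargs, **(overrides or {})}
        return pl.Trainer(**kwargs)

    def fit(self, module, dataloader, trainer_kwargs=None):
        trainer = self._build_trainer(trainer_kwargs)
        trainer.fit(module, dataloader)
        return self

    def predict(self, module, dataloader, trainer_kwargs=None):
        trainer = self._build_trainer(trainer_kwargs)
        return trainer.predict(module, dataloaders=dataloader)
```

With that shape, a call like `model.predict(..., trainer_kwargs={"accelerator": "cpu"})` could use different trainer args than training without touching the stored defaults.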

hrzn (Contributor, Author) commented Dec 6, 2022

Ping @dennisbader

madtoinou (Collaborator) commented

I think that this issue is solved by #1371.

dennisbader (Collaborator) commented

Yes, this will be solved with #1371.

@alexcolpitts96, the PR adjusts TorchForecastingModel._setup_trainer(), but we cannot avoid storing the trainer as an attribute on the model object due to weak reference errors: if we don't store it, the trainer gets dropped after fit/predict calls and PyTorch Lightning loses its reference to the trainer (a sketch of the issue is below).
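
A minimal sketch of the failure mode and the fix, assuming PyTorch Lightning's behaviour of exposing the running trainer to the module through a weak reference; the class and attribute names here are illustrative, not Darts' actual code:

```python
import pytorch_lightning as pl


class ModelWrapperSketch:
    """Illustration only: keep a strong reference to the trainer on the wrapper."""

    def __init__(self, module: pl.LightningModule, **trainer_kwargs):
        self.module = module
        self.trainer_kwargs = trainer_kwargs
        self.trainer = None  # strong reference, so the trainer outlives fit()

    def fit(self, dataloader):
        # If the Trainer were only a local variable here, nothing would keep it
        # alive once fit() returns; the module's back-reference to its trainer
        # could then be dangling by the time predict() needs it.
        self.trainer = pl.Trainer(**self.trainer_kwargs)
        self.trainer.fit(self.module, dataloader)
        return self

    def predict(self, dataloader):
        # Reuse (or rebuild) the stored trainer; self.trainer keeps it alive.
        if self.trainer is None:
            self.trainer = pl.Trainer(**self.trainer_kwargs)
        return self.trainer.predict(self.module, dataloaders=dataloader)
```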

darts automation moved this from To do to Done Dec 10, 2022