
[Feat] Scaled Distributions #345

Merged: 11 commits merged from ScaledDistributions into main on Dec 1, 2022

Conversation

@kdgutier (Collaborator) commented Dec 1, 2022

This PR has no implications for point forecasting methods.
It incorporates a reinterpretation of DeepAR's training strategy that scales the DistributionLoss (a minimal sketch of the idea follows the list below).

  • Added an identity option to the TemporalScaler class, simplifying the train/validation/predict steps of the Base classes.
  • Simplified code with the identity scaler, eliminating the if scaler is None conditions in the train/validation/predict steps.
  • Changed the BaseRecurrent and BaseWindows train/validation/predict steps to accept scaled distributions.
  • Changed the model notebooks to perform qualitative tests of the DistributionLoss.
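
To make the scaled-distribution idea concrete, here is a minimal sketch (an assumption, not the PR's actual code): the network predicts distribution parameters on the scaler-normalized series, and the temporal scaler's shift/scale statistics map the distribution back to the original data scale before evaluating the negative log-likelihood. The name `scaled_nll` and its signature are hypothetical.

```python
import torch
from torch.distributions import Normal, AffineTransform, TransformedDistribution

def scaled_nll(mu, sigma, y, shift, scale):
    # Hypothetical helper, not the PR's API: mu/sigma are predicted on the
    # normalized series; shift/scale are the temporal scaler's statistics.
    base = Normal(loc=mu, scale=sigma)
    # The affine transform z -> shift + scale * z undoes the normalization,
    # so the likelihood of y is evaluated in the original data units.
    dist = TransformedDistribution(base, [AffineTransform(loc=shift, scale=scale)])
    return -dist.log_prob(y).mean()

# Toy usage: a series that was normalized with shift=10, scale=2.
y = torch.tensor([12.0, 8.0, 14.0])
loss = scaled_nll(mu=torch.zeros(3), sigma=torch.ones(3), y=y,
                  shift=torch.tensor(10.0), scale=torch.tensor(2.0))
print(loss)
```

Because the transform is affine, `log_prob` reduces to the base log-likelihood of the normalized value minus `log(scale)`, which is what makes the resulting loss scale-dependent.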

PENDING:

  • Add mean_scaler with shift=0 and scale=mean(x) to the temporal scalers (sketched after this list).
  • Add an outsample_y recovery unit test using the _inv_normalization method (Poisson non-negativity is failing).
  • Rename loc -> shift to homogenize the scalers and the Scaled Distributions.
  • Change the scaler_type default back to None.
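
For reference, a hedged sketch of the two scaler statistics mentioned above: the identity scaler already added in this PR and the pending mean_scaler with shift=0 and scale=mean(x). The function names and the eps guard are illustrative assumptions, not the library's API.

```python
import torch

def identity_statistics(x, dim=-1):
    # Identity scaler: shift=0, scale=1, so normalization is a no-op and the
    # train/validation/predict steps need no `if scaler is None` branching.
    shift = torch.zeros_like(x.mean(dim=dim, keepdim=True))
    scale = torch.ones_like(shift)
    return shift, scale

def mean_statistics(x, dim=-1, eps=1e-6):
    # Pending mean_scaler: shift=0 and scale=mean(x); the eps term is an
    # assumed guard against all-zero series, not part of the PR.
    shift = torch.zeros_like(x.mean(dim=dim, keepdim=True))
    scale = x.mean(dim=dim, keepdim=True) + eps
    return shift, scale

x = torch.tensor([[1.0, 2.0, 3.0]])
shift, scale = mean_statistics(x)   # shift=0, scale≈2.0
x_norm = (x - shift) / scale        # normalized input fed to the model
```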

@review-notebook-app commented:

Check out this pull request on ReviewNB to see visual diffs and provide feedback on the Jupyter Notebooks.

kdgutier linked an issue on Dec 1, 2022 that may be closed by this pull request.
kdgutier merged commit 4e7ff5a into main on Dec 1, 2022.
kdgutier deleted the ScaledDistributions branch on Dec 1, 2022.
Development

Successfully merging this pull request may close these issues.

Scale-dependent DistributionLoss