
[FEAT] DistributionLoss #339

Merged · 6 commits · Nov 24, 2022

Conversation

@kdgutier (Collaborator) commented Nov 24, 2022

This PR has minimal implications for NeuralForecast models.

  • Added a new domain_map method to all the PyTorch training losses.
  • Added a call to domain_map at the end of every model's forward pass.
  • Deprecated the if condition on the dimension of the univariate losses in favor of a domain_map that automatically squeezes the output's last dimension.
  • Changed the BaseRecurrent normalize and invert_normalize methods to operate on the new squeezed univariate outputs.
  • Added DistributionLoss, with Poisson, Normal, and StudentT options (see the sketch after this list).
  • Validated that the changes have minimal effects on all the models.
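
For context, here is a minimal sketch of how domain_map and DistributionLoss could fit together. The class layout, helper names, and parameter constraints below are illustrative assumptions, not the actual NeuralForecast implementation:

```python
import torch
import torch.distributions as D
import torch.nn.functional as F


class DistributionLoss:
    """Sketch: negative log-likelihood over a torch.distributions family."""

    # Number of raw network outputs required per forecast step.
    N_PARAMS = {"Poisson": 1, "Normal": 2, "StudentT": 3}

    def __init__(self, distribution: str = "Normal"):
        assert distribution in self.N_PARAMS, f"unknown distribution {distribution}"
        self.distribution = distribution
        self.outputsize_multiplier = self.N_PARAMS[distribution]

    def domain_map(self, y_hat: torch.Tensor):
        # Split the raw output [..., horizon, n_params] into one tensor per
        # distribution parameter and squeeze the trailing dimension, mirroring
        # how the univariate losses now squeeze their last dimension instead
        # of relying on an `if` over the output dimensionality.
        params = torch.tensor_split(y_hat, self.outputsize_multiplier, dim=-1)
        return tuple(p.squeeze(-1) for p in params)

    def _distribution(self, *params) -> D.Distribution:
        # softplus keeps rate/scale positive; the +2 keeps StudentT df > 2.
        if self.distribution == "Poisson":
            return D.Poisson(rate=F.softplus(params[0]))
        if self.distribution == "Normal":
            return D.Normal(loc=params[0], scale=F.softplus(params[1]))
        return D.StudentT(df=2.0 + F.softplus(params[0]),
                          loc=params[1], scale=F.softplus(params[2]))

    def __call__(self, y: torch.Tensor, distr_args) -> torch.Tensor:
        return -self._distribution(*distr_args).log_prob(y).mean()


# Usage: a model's forward would end with `return self.loss.domain_map(y_hat)`.
loss = DistributionLoss("StudentT")
y_hat = torch.randn(32, 12, 3)       # [batch, horizon, 3 raw params]
distr_args = loss.domain_map(y_hat)  # three [32, 12] tensors
y = torch.randn(32, 12)
print(loss(y, distr_args))
```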

PENDING:

  • Use the AffineTransformation in the loss function to anchor the prediction scale, so that the loss operates in the data's "original" scale (see the sketch after this list).
  • Add a CircleCI test for quantile predictions.
  • Change PMM and GMM to be compatible with the DistributionLoss operation.
  • Change the Normal and StudentT losses to initialize (or wrangle) the scale parameter so that it starts out overestimated, which will smooth the optimization process (and possibly also clip gradients).
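
One way the pending AffineTransformation item could look, assuming it refers to torch.distributions' AffineTransform: wrapping the base distribution in a TransformedDistribution means log_prob is evaluated in the original data scale. The loc/scale statistics here are placeholders for whatever the scaler provides:

```python
import torch
import torch.distributions as D
from torch.distributions.transforms import AffineTransform

# Placeholder normalization statistics (in practice, per-series values
# coming from the model's scaler).
loc, scale = torch.tensor(100.0), torch.tensor(25.0)

# Distribution parameterized in the normalized space.
base = D.Normal(loc=torch.zeros(12), scale=torch.ones(12))

# Anchoring: AffineTransform maps the distribution back to the data's
# original scale, so the NLL is computed there directly.
anchored = D.TransformedDistribution(base, [AffineTransform(loc=loc, scale=scale)])

y = loc + scale * torch.randn(12)  # targets in the original scale
nll = -anchored.log_prob(y).mean()
print(nll)
```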

