
Add quantile loss metric #1559

Merged
merged 24 commits into from Mar 7, 2023

Conversation


@JanFidor JanFidor commented Feb 11, 2023

Fixes #1367

Summary

Add quantile loss metric with tests

I found an already existing implementation of quantile loss here:

errors = target.unsqueeze(-1) - model_output

and used it as a starting point for the refactoring. Substituting that implementation with the new metric function would definitely be in line with DRY, but for now it doesn't support CUDA, while QuantileRegression does. I'm not certain whether it actually needs CUDA or if that was just an implementation quirk, so I left it as is for now. I could also add support for multiple tau values as a parameter and return their mean loss.
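For reference, the pinball (quantile) loss the new metric computes can be sketched in NumPy as follows; the function name and signature here are illustrative, not the actual darts API:

```python
import numpy as np

def quantile_loss_sketch(actual, pred, tau=0.5):
    """Illustrative pinball (quantile) loss; not the actual darts API."""
    # errors are positive where the prediction undershoots the target
    errors = actual - pred
    # pinball loss: tau * error for undershoots, (tau - 1) * error otherwise
    losses = np.maximum(tau * errors, (tau - 1) * errors)
    # average over all values (time steps)
    return losses.mean()
```

For tau = 0.5 this reduces to half the mean absolute error, which is a handy sanity check in tests.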

Other Information

Speaking of QuantileRegression: while browsing it I stumbled upon this line:

return losses.sum(dim=dim_q).mean()

I'll be honest, I've been looking at it for quite a while now and I just can't wrap my head around why it takes that sum before returning the mean. It would penalize the number of quantiles the model optimizes for, which would kind of work as a regularization technique. Also, if we optimize for quantiles q and 1 - q, they negate each other and give us two instances of quantile 0.5, so we could try to skip those unnecessary operations; I'm not certain the speed-up would be meaningful, though. Please let me know what you think.


@madtoinou madtoinou left a comment


LGTM.

I don't have enough experience to comment on the line you highlighted in darts/darts/utils/likelihood_models.py; I'll try to think more about it and circle back to you.

@JanFidor

In the last commit I used a simple mean across all values instead of the previously used reduction param, because it wasn't passing

def helper_test_multivariate_duplication_equality(self, metric, **kwargs):
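If I read the helper's name correctly, it checks that duplicating the components of a multivariate series leaves the metric unchanged. A minimal sketch of that invariant (all names hypothetical, using a plain mean over all values):

```python
import numpy as np

def quantile_loss_sketch(actual, pred, tau=0.5):
    # pinball loss averaged over all values; illustrative only
    errors = actual - pred
    return np.maximum(tau * errors, (tau - 1) * errors).mean()

# (time, components) arrays; duplicating every component should not
# change a metric that takes a simple mean across all values
actual = np.array([[1.0, 2.0], [3.0, 4.0]])
pred = np.array([[1.5, 1.5], [2.5, 4.5]])
dup_actual = np.tile(actual, (1, 2))
dup_pred = np.tile(pred, (1, 2))
```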

@madtoinou madtoinou left a comment


LGTM. The reduction argument is actually supposed to be used when reducing the metrics for multivariate time series (thanks to the decorator) and should not be used inside the metric itself.
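In other words, the reduction is applied across per-component metric values by a wrapper, so the metric body only ever sees univariate inputs. A hypothetical sketch of that pattern (not the actual darts decorator):

```python
import numpy as np
from functools import wraps

def multivariate_support_sketch(metric):
    """Hypothetical stand-in for darts' multivariate-support decorator."""
    @wraps(metric)
    def wrapper(actual, pred, reduction=np.mean, **kwargs):
        # apply the univariate metric to each component (column), then
        # reduce across components with `reduction`
        per_component = [metric(actual[:, i], pred[:, i], **kwargs)
                         for i in range(actual.shape[1])]
        return reduction(per_component)
    return wrapper

@multivariate_support_sketch
def mae_sketch(actual, pred):
    # a simple univariate metric; the wrapper handles multivariate input
    return np.abs(actual - pred).mean()
```

With this shape, passing `reduction=np.max` would report the worst component instead of the average, without the metric itself knowing about components.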

@JanFidor

> LGTM.
>
> I don't have enough experience to comment on the line you highlighted in darts/darts/utils/likelihood_models.py, I'll try to think more about it and circle back to you.

I mentioned it because I happened to stumble into it and it's somewhat connected to this PR. It's nothing crucial for now, so it might be a good idea to open a separate issue for it to keep the PR small and focused on just the metric. Let me know what you think.


codecov-commenter commented Feb 15, 2023

Codecov Report

Patch coverage: 100.00% and project coverage change: -0.04% ⚠️

Comparison is base (7abb872) 94.09% compared to head (170e59f) 94.05%.


Additional details and impacted files
@@            Coverage Diff             @@
##           master    #1559      +/-   ##
==========================================
- Coverage   94.09%   94.05%   -0.04%     
==========================================
  Files         125      125              
  Lines       11185    11182       -3     
==========================================
- Hits        10524    10517       -7     
- Misses        661      665       +4     
Impacted Files Coverage Δ
darts/metrics/__init__.py 100.00% <ø> (ø)
darts/metrics/metrics.py 100.00% <100.00%> (ø)
darts/utils/data/tabularization.py 99.27% <0.00%> (-0.73%) ⬇️
darts/timeseries.py 92.28% <0.00%> (-0.22%) ⬇️
darts/ad/anomaly_model/filtering_am.py 91.93% <0.00%> (-0.13%) ⬇️
...arts/models/forecasting/torch_forecasting_model.py 89.88% <0.00%> (-0.04%) ⬇️
darts/models/forecasting/block_rnn_model.py 98.24% <0.00%> (-0.04%) ⬇️
darts/models/forecasting/nhits.py 99.27% <0.00%> (-0.01%) ⬇️
darts/datasets/__init__.py 100.00% <0.00%> (ø)



@hrzn hrzn left a comment


LGTM, only small comments.

@hrzn

hrzn commented Feb 16, 2023

@JanFidor thanks for the PR!
Related to your comment about the other implementation in likelihood_models.py: that implementation is in PyTorch (and supports CUDA) because it's directly used as a loss function for training the neural network models (when the likelihood is set to "quantile").
In the case of the metrics, it's better to keep a Numpy implementation only, as you did.

Regarding the operation

return losses.sum(dim=dim_q).mean()

I would have to dive into it a little more, but I think this is intended. We first sum the losses over the last dimension, so that the final loss for each sample is the sum of the losses for each quantile; note that during training the gradients will flow backward through this sum and be redistributed to the individual "quantile heads" of the neural network, tuning them to predict their respective quantiles correctly. The last mean() call simply takes the mean over the batch dimension. So at first sight nothing there looks suspicious to me, but I might be missing something, of course.
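A NumPy re-creation of that reduction may make the intent clearer (shapes and names are illustrative; the actual darts code operates on torch tensors and includes a time dimension):

```python
import numpy as np

def quantile_regression_loss_sketch(model_output, target, quantiles):
    # model_output: (batch, n_quantiles); target: (batch,)
    errors = target[:, None] - model_output       # broadcast over quantiles
    q = np.asarray(quantiles)
    losses = np.maximum(q * errors, (q - 1) * errors)
    # sum over the quantile dimension, so each sample's loss aggregates
    # all "quantile heads"; then mean() averages over the batch
    return losses.sum(axis=-1).mean()
```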

Finally, I think your averaging in Numpy over the time dimension is correct, as we want to return the mean quantile loss (over time).

@JanFidor
Copy link
Contributor Author

Thanks for the explanation; I wasn't looking at the big picture with back-propagation, it makes sense now.

@JanFidor
Copy link
Contributor Author

Hmmm, I think the macOS failure might be unrelated to the code, as Ubuntu passes; please let me know if there's something I should change or if the CI is flaky.

@dennisbader dennisbader added this to In review in darts via automation Feb 28, 2023

@hrzn hrzn left a comment


💯

@JanFidor

JanFidor commented Mar 7, 2023

Thanks for the reviews @madtoinou @hrzn !

@madtoinou madtoinou merged commit b07aaa8 into unit8co:master Mar 7, 2023
darts automation moved this from In review to Done Mar 7, 2023
@dennisbader dennisbader moved this from Done to Released in darts May 23, 2023
alexcolpitts96 pushed a commit to alexcolpitts96/darts that referenced this pull request May 31, 2023
* first implementation of quantile loss

* add quantile loss to metrics __init__ and tests

* refactor

* rename pinball loss to quantile loss

* black

* use reduction to aggregate losses and update docs

* black + isort

* rollback to simple mean instead of reduction param

* change overlooked copy-paste comment

* black enter

* docs changes

* flake8

---------

Co-authored-by: Julien Herzen <julien@unit8.co>
Co-authored-by: madtoinou <32447896+madtoinou@users.noreply.github.com>
Co-authored-by: Dennis Bader <dennis.bader@gmx.ch>

Successfully merging this pull request may close these issues.

Add a pinball loss metric and deprecate rho_risk (?)