
Add Mean Absolute Percentage Error #248

Merged
45 commits merged on Jun 10, 2021

Conversation

pranjaldatta
Contributor

@pranjaldatta pranjaldatta commented May 14, 2021

Before submitting

  • Was this discussed/approved via a GitHub issue? (no need for typos and docs improvements)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure to update the docs?
  • Did you write any new necessary tests?

What does this PR do?

Fixes #235.

PR review

Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.

Did you have fun?

Make sure you had fun coding 🙃

@pep8speaks

pep8speaks commented May 14, 2021

Hello @pranjaldatta! Thanks for updating this PR.

There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻

Comment last updated at 2021-06-09 23:23:39 UTC

@codecov

codecov bot commented May 14, 2021

Codecov Report

Merging #248 (65e09f0) into master (99da3ac) will increase coverage by 0.03%.
The diff coverage is 100.00%.


@@            Coverage Diff             @@
##           master     #248      +/-   ##
==========================================
+ Coverage   96.75%   96.78%   +0.03%     
==========================================
  Files          92       94       +2     
  Lines        3053     3084      +31     
==========================================
+ Hits         2954     2985      +31     
  Misses         99       99              
Flag         Coverage Δ
Linux        78.97% <97.72%> (+0.18%) ⬆️
Windows      78.97% <97.72%> (+0.18%) ⬆️
cpu          96.72% <100.00%> (+0.03%) ⬆️
gpu          96.75% <100.00%> (+0.03%) ⬆️
macOS        96.72% <100.00%> (+0.03%) ⬆️
pytest       96.78% <100.00%> (+0.03%) ⬆️
python3.6    95.71% <100.00%> (+0.04%) ⬆️
python3.8    96.72% <100.00%> (+0.03%) ⬆️
python3.9    96.62% <100.00%> (+0.03%) ⬆️
torch1.3.1   95.71% <100.00%> (+0.04%) ⬆️
torch1.4.0   95.81% <100.00%> (+0.04%) ⬆️
torch1.8.1   96.62% <100.00%> (+0.03%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files                                          Coverage Δ
torchmetrics/__init__.py                                100.00% <ø> (ø)
torchmetrics/functional/__init__.py                     100.00% <100.00%> (ø)
torchmetrics/functional/regression/__init__.py          100.00% <100.00%> (ø)
...ional/regression/mean_absolute_percentage_error.py   100.00% <100.00%> (ø)
...trics/functional/regression/mean_relative_error.py   100.00% <100.00%> (ø)
torchmetrics/regression/__init__.py                     100.00% <100.00%> (ø)
...trics/regression/mean_absolute_percentage_error.py   100.00% <100.00%> (ø)

Continue to review full report at Codecov.

Legend:
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 99da3ac...65e09f0. Read the comment docs.

@pranjaldatta
Contributor Author

pranjaldatta commented May 14, 2021

Hey @SkafteNicki, I was working on #235. I just added the source code, but I started running into an issue with pytest and I couldn't figure out what the problem was. (Please ignore the docs/formatting test failures; I will fix them before the final PR!)

So the issue is,

Issue

When I run $ pytest tests/regression/test_mean_error.py -v

================================= test session starts =================================
platform linux -- Python 3.7.1, pytest-6.1.2, py-1.9.0, pluggy-0.13.1 -- /home/pranjal/miniconda3/envs/generalPT/bin/python
cachedir: .pytest_cache
rootdir: /home/pranjal/OSS/metrics, configfile: setup.cfg
collected 68 items                                                                    

tests/regression/test_mean_error.py::TestMeanError::test_mean_error_class[True-True-MeanSquaredError-mean_squared_error-mean_squared_error-preds0-target0-_single_target_sk_metric] PASSED [  1%]
tests/regression/test_mean_error.py::TestMeanError::test_mean_error_class[True-True-MeanSquaredError-mean_squared_error-mean_squared_error-preds1-target1-_multi_target_sk_metric] PASSED [  2%]
tests/regression/test_mean_error.py::TestMeanError::test_mean_error_class[True-True-MeanAbsoluteError-mean_absolute_error-mean_absolute_error-preds0-target0-_single_target_sk_metric] PASSED [  4%]
tests/regression/test_mean_error.py::TestMeanError::test_mean_error_class[True-True-MeanAbsoluteError-mean_absolute_error-mean_absolute_error-preds1-target1-_multi_target_sk_metric] PASSED [  5%]
tests/regression/test_mean_error.py::TestMeanError::test_mean_error_class[True-True-MeanSquaredLogError-mean_squared_log_error-mean_squared_log_error-preds0-target0-_single_target_sk_metric] PASSED [  7%]
tests/regression/test_mean_error.py::TestMeanError::test_mean_error_class[True-True-MeanSquaredLogError-mean_squared_log_error-mean_squared_log_error-preds1-target1-_multi_target_sk_metric] PASSED [  8%]
tests/regression/test_mean_error.py::TestMeanError::test_mean_error_class[True-True-MeanAbsolutePercentageError-mean_absolute_percentage_error-mean_absolute_percentage_error-preds0-target0-_single_target_sk_metric] 

It stalls at the mean_absolute_percentage_error test, and I have to send a KeyboardInterrupt to exit the stall. I have no clue why this is happening. I would really appreciate any help or pointers on how to solve this issue!

Implementation verification

You can verify that the metric is implemented correctly and gives results comparable to scikit-learn by running the following script from the project root:

import torch
# Import the function itself, not the module, so that mape(...) is callable
from torchmetrics.functional.regression.mean_absolute_percentage_error import mean_absolute_percentage_error as mape
from torchmetrics.regression import MeanAbsolutePercentageError as MAPE
from sklearn.metrics import mean_absolute_percentage_error as sk_mape

y_true = torch.tensor([1, 10, 1e6])
y_pred = torch.tensor([0.9, 15, 1.2e6])
print(f"pred: {y_pred}")
print(f"Y: {y_true}")

reg_mape = MAPE()

print("-------Functional Test-------")
print("Func_ans: ", mape(y_pred, y_true))
print("Module_ans: ", reg_mape(y_pred, y_true))
print("sk_ans: ", sk_mape(y_true, y_pred))

print("--------------------")

@SkafteNicki SkafteNicki changed the title initial code, test failing Add Mean Absolute Procentage Error May 14, 2021
@SkafteNicki SkafteNicki added the enhancement New feature or request label May 14, 2021
@SkafteNicki SkafteNicki added this to the v0.4 milestone May 14, 2021
Member

@SkafteNicki SkafteNicki left a comment


Hi @pranjaldatta,
The PR is looking really good. I am going to investigate the failing test in the next few days, but here are some comments in the meantime. In addition, please add the following:

  • References in docs
  • Entry in changelog

@pranjaldatta
Contributor Author

Hi @pranjaldatta,
The PR is looking really good. I am going to investigate the failing test in the next few days, but here are some comments in the meantime. In addition, please add the following:

  • References in docs
  • Entry in changelog

Sure, I'll resolve them right away!

@pranjaldatta pranjaldatta changed the title Add Mean Absolute Procentage Error Add Mean Absolute Percentage Error May 15, 2021
@SkafteNicki
Member

Hi @pranjaldatta,
I have now fixed the test. The problem was that the epsilon parameter was not being correctly transferred to the different devices. I have removed it as a user argument, since it is more of a numerical trick than a parameter that affects the metric.
Please take a look at the code and check that I have not completely butchered it :]

The PR looks done to me and is ready to be merged, but if you want to work more on it, you could think about adding the multioutput argument from the sklearn implementation.
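For readers curious what such an epsilon clamp typically looks like, here is a minimal sketch (the function name and epsilon value are illustrative, not the merged torchmetrics code). The point of the fix above is that a plain Python float constant, unlike a tensor, never needs to be moved between devices:

```python
import torch

# Numerical-stability constant; the value is illustrative.
# A plain float works on any device, so nothing has to be transferred.
_EPSILON = 1e-8

def mape_sketch(preds: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """MAPE = mean(|preds - target| / max(|target|, epsilon))."""
    abs_per_error = torch.abs(preds - target) / torch.clamp(torch.abs(target), min=_EPSILON)
    return torch.mean(abs_per_error)
```

The clamp only matters when a target value is (near) zero, where the plain ratio would divide by zero; for well-scaled targets it leaves the metric unchanged.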

@pranjaldatta
Contributor Author

Hi @pranjaldatta,
I have now fixed the test. The problem was that the epsilon parameter was not being correctly transferred to the different devices. I have removed it as a user argument, since it is more of a numerical trick than a parameter that affects the metric.
Please take a look at the code and check that I have not completely butchered it :]

The PR looks done to me and is ready to be merged, but if you want to work more on it, you could think about adding the multioutput argument from the sklearn implementation.

Hey @SkafteNicki , thank you so much for your help! It works fine now!

Regarding multioutput, wouldn't it break consistency somewhat, since the other metrics don't offer this feature either?

@SkafteNicki
Member

@pranjaldatta then let's not implement multioutput for now. We can always revisit if someone requests the change :]
Going to mark this as ready for review.
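For future reference, sklearn's multioutput argument essentially controls whether per-column errors are returned raw or averaged. A hedged sketch of what that could look like for MAPE (function name and epsilon handling are hypothetical; this was deliberately not added in this PR):

```python
import torch

def mape_multioutput(preds: torch.Tensor, target: torch.Tensor,
                     multioutput: str = "uniform_average",
                     epsilon: float = 1e-8) -> torch.Tensor:
    """Column-wise MAPE for (n_samples, n_outputs) inputs,
    mimicking sklearn's `multioutput` semantics."""
    per_output = torch.mean(
        torch.abs(preds - target) / torch.clamp(torch.abs(target), min=epsilon),
        dim=0,
    )
    if multioutput == "raw_values":
        return per_output       # one MAPE per output column
    return per_output.mean()    # "uniform_average": single scalar
```

With "raw_values" the caller can see which output dimension is hardest to predict; "uniform_average" matches the scalar behaviour of the merged metric.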

@SkafteNicki SkafteNicki marked this pull request as ready for review May 18, 2021 09:39
@Borda Borda self-requested a review June 9, 2021 07:05
auto-merge was automatically disabled June 9, 2021 14:54

Head branch was pushed to by a user without write access

@pranjaldatta
Contributor Author

Hey @Borda, I have added the deprecation warning, I hope it's okay!

@Borda Borda enabled auto-merge (squash) June 9, 2021 23:23
@Borda
Member

Borda commented Jun 9, 2021

@pranjaldatta thank you for your patience! 🐰

@pranjaldatta
Contributor Author

@pranjaldatta thank you for your patience!

Always a pleasure to contribute to this community in whatever small way possible!

@Borda Borda merged commit 90743f0 into Lightning-AI:master Jun 10, 2021
Successfully merging this pull request may close these issues.

Add Mean Absolute Percentage Error metric
5 participants