
New metrics #100

Merged: 21 commits, Dec 10, 2020

Conversation

anvelezec (Collaborator) commented Oct 15, 2020

Hi,

Activities in this pull request (a sketch of the two metric formulas follows the list):

  • F1 score

  • F1 score test cases

  • MAPE as metric

  • MAPE test cases

  • MAPE functions refactored due to a name incompatibility
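For context, a minimal sketch of the two metrics in jax.numpy. It follows the textbook definitions and is not necessarily how elegy implements them; the threshold and eps arguments here are illustrative only.

import jax.numpy as jnp

def f1(y_true, y_pred, threshold=0.5, eps=1e-7):
    # Binarize probabilities at the threshold, then compute F1 = 2PR / (P + R).
    y_pred = (y_pred >= threshold).astype(jnp.float32)
    y_true = y_true.astype(jnp.float32)
    tp = jnp.sum(y_true * y_pred)
    fp = jnp.sum((1.0 - y_true) * y_pred)
    fn = jnp.sum(y_true * (1.0 - y_pred))
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return 2.0 * precision * recall / (precision + recall + eps)

def mean_absolute_percentage_error(y_true, y_pred, eps=1e-7):
    # MAPE in percent; the clip guards against division by zero.
    return 100.0 * jnp.mean(jnp.abs(y_true - y_pred) / jnp.clip(jnp.abs(y_true), eps, None))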

codecov-io commented Oct 26, 2020

Codecov Report

Merging #100 (27b7c93) into master (c7606f9) will increase coverage by 0.47%.
The diff coverage is 100.00%.


@@            Coverage Diff             @@
##           master     #100      +/-   ##
==========================================
+ Coverage   76.94%   77.41%   +0.47%     
==========================================
  Files         102      106       +4     
  Lines        4746     4836      +90     
==========================================
+ Hits         3652     3744      +92     
+ Misses       1094     1092       -2     
Impacted Files Coverage Δ
elegy/losses/__init__.py 100.00% <ø> (ø)
...legy/losses/mean_squared_logarithmic_error_test.py 94.87% <ø> (ø)
elegy/losses/mean_absolute_percentage_error.py 100.00% <100.00%> (ø)
...legy/losses/mean_absolute_percentage_error_test.py 100.00% <100.00%> (ø)
elegy/losses/mean_squared_logarithmic_error.py 100.00% <100.00%> (ø)
elegy/metrics/__init__.py 100.00% <100.00%> (ø)
elegy/metrics/f1.py 100.00% <100.00%> (ø)
elegy/metrics/f1_test.py 100.00% <100.00%> (ø)
elegy/metrics/mean_absolute_error_test.py 100.00% <100.00%> (ø)
elegy/metrics/mean_absolute_percentage_error.py 100.00% <100.00%> (ø)
... and 7 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

Review comment on this test hunk:

def test_cummulative(self):
    em = elegy.metrics.F1(threshold=0.3)
    # 1st run
    y_true = jnp.array([0, 1, 1, 1])
Collaborator:
I think we should try to move to using random arrays and checking against the TF implementation to have stronger guarantees.

cgarciae (Collaborator)

This is very good in general, but the tests should be generalized to random arrays and checked against the TF implementation. You can also test that sample_weights and reduction are well behaved.
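A hedged sketch of what such a randomized comparison could look like, assuming tensorflow_addons is importable as tfa and that an elegy F1 instance can be called as em(y_true=..., y_pred=...); the exact call convention should be taken from the existing metric tests.

import numpy as np
import jax.numpy as jnp
import tensorflow_addons as tfa
import elegy

def test_f1_matches_tfa():
    rng = np.random.default_rng(42)
    y_true = rng.integers(0, 2, size=(64, 1)).astype(np.float32)
    y_pred = rng.uniform(0.0, 1.0, size=(64, 1)).astype(np.float32)

    # elegy value (assumed call convention, see the note above).
    em = elegy.metrics.F1(threshold=0.3)
    elegy_value = em(y_true=jnp.asarray(y_true), y_pred=jnp.asarray(y_pred))

    # Reference value from tensorflow_addons: a single "class" with the same threshold.
    reference = tfa.metrics.F1Score(num_classes=1, average="micro", threshold=0.3)
    reference.update_state(y_true, y_pred)

    assert np.allclose(np.asarray(elegy_value), reference.result().numpy(), atol=1e-5)

The same pattern works for MAPE against tf.keras.metrics.MeanAbsolutePercentageError, and sample_weights can be exercised by passing the same random weight vector to both sides.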

anvelezec (Collaborator, Author) commented Nov 21, 2020

@cgarciae I am having issues adding tensorflow_addons to the pyproject.toml. I need the library to compare elegy's F1 against tfa's F1:

[screenshots of the error output]

cgarciae (Collaborator)

@anvelezec try updating poetry:

poetry self update --version 1.1.4

cgarciae merged commit 65e807c into poets-ai:master on Dec 10, 2020