New metrics #100
Conversation
Codecov Report
```
@@             Coverage Diff             @@
##           master     #100      +/-   ##
==========================================
+ Coverage   76.94%   77.41%   +0.47%
==========================================
  Files         102      106       +4
  Lines        4746     4836      +90
==========================================
+ Hits         3652     3744      +92
+ Misses       1094     1092       -2
```
elegy/metrics/f1_test.py (outdated)

```python
def test_cummulative(self):
    em = elegy.metrics.F1(threshold=0.3)
    # 1st run
    y_true = jnp.array([0, 1, 1, 1])
```
I think we should try to move to using random arrays and checking against the TF implementation, to have stronger guarantees.
This is very good in general, but the tests should be generalized to random arrays and checked against the TF implementation. You can also test that
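To illustrate the kind of test the review is asking for, here is a minimal sketch of checking a threshold-based F1 on random arrays. It uses plain numpy as a stand-in (not elegy's or TF's actual APIs), and instead of calling `tensorflow_addons` it cross-checks two algebraically equivalent formulations of F1, which is the same pattern as comparing against a reference implementation:

```python
import numpy as np

# Hypothetical sketch: generate random labels and probabilities, apply a
# threshold, and verify that F1 computed directly from counts matches the
# precision/recall formulation.
rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, size=1000)
y_prob = rng.uniform(0.0, 1.0, size=1000)
y_pred = (y_prob >= 0.3).astype(int)

# Confusion-matrix counts for the positive class.
tp = np.sum((y_pred == 1) & (y_true == 1))
fp = np.sum((y_pred == 1) & (y_true == 0))
fn = np.sum((y_pred == 0) & (y_true == 1))

# F1 directly from counts.
f1_counts = 2 * tp / (2 * tp + fp + fn)

# F1 via precision and recall; must agree with the count-based form.
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1_pr = 2 * precision * recall / (precision + recall)

assert np.isclose(f1_counts, f1_pr)
```

In a real test suite the second formulation would be replaced by a call to the reference implementation (e.g. `tfa.metrics.F1Score`), so that any disagreement on random inputs flags a bug.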
@cgarciae I am having issues adding tensorflow_addons to the toml. I need the library to contrast elegy's F1 against tfa's F1.
@anvelezec try updating poetry: `poetry self update --version 1.1.4`
Hi,
Activities in this pull request:
- F1 score
- F1 score test cases
- MAPE as metric
- MAPE test cases
- Refactor of the MAPE functions due to a name incompatibility
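For reference, the MAPE metric added here follows the canonical definition, mean(|y_true − y_pred| / |y_true|) × 100. The sketch below is a plain-numpy illustration of that formula, not necessarily the exact implementation merged in this PR:

```python
import numpy as np

def mape(y_true, y_pred):
    # Mean Absolute Percentage Error:
    # mean(|y_true - y_pred| / |y_true|) * 100.
    # Note: undefined when any element of y_true is zero.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0

print(mape([2.0, 4.0], [1.0, 3.0]))  # → 37.5
```

Per-element errors here are 0.5 and 0.25, so the mean is 0.375, giving 37.5%.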