[scala] Make accuracy independent of output size (fix #8226) #8297
Conversation
Thanks @benqua, I think a better way is to change
@Javelinjs, you're right, it changes the definition of accuracy for output.size > 1. This change provides a definition of accuracy that matches the one from Wikipedia for binary classification. It seems weird (at least to me :) ) that the accuracy depends on the output dimension and can grow to very large numbers. By dividing by the label dimension, we keep the accuracy between 0 and 1, which is the expected range of a "proportion". If we change sumMetric to Double, should we do it only for the value stored internally and keep Float in the EvalMetric API?
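To illustrate the normalization being discussed (a minimal sketch only, not the actual EvalMetric code): dividing the count of correct predictions by the total number of labels bounds the metric to [0, 1], the expected range of a proportion, no matter how many outputs the network produces.

```scala
// Illustrative sketch -- not the real EvalMetric implementation.
// Normalizing by the number of labels keeps the result in [0, 1]
// regardless of the output dimension.
def accuracy(predicted: Array[Int], labels: Array[Int]): Double = {
  require(predicted.length == labels.length, "shape mismatch")
  val correct = predicted.zip(labels).count { case (p, l) => p == l }
  correct.toDouble / labels.length
}
```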
We should keep it the same as the other language bindings, especially Python. What if we make it Double in the EvalMetric API?
It can be done, but it would break the API for people calling EvalMetric.get (https://github.com/apache/incubator-mxnet/blob/master/scala-package/core/src/main/scala/ml/dmlc/mxnet/EvalMetric.scala#L52).
We can convert it back to Float when calling
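A minimal sketch of that compromise (class, field, and method names here are illustrative assumptions, not the exact EvalMetric API): accumulate in Double internally and convert back to Float only at the public getter, so existing callers keep compiling.

```scala
// Sketch: high-precision Double accumulator inside, Float at the API boundary.
// MetricSketch and its members are hypothetical names for illustration.
class MetricSketch(val name: String) {
  private var sumMetric: Double = 0.0 // accumulate in Double internally
  private var numInst: Long = 0L

  def update(correct: Long, total: Long): Unit = {
    sumMetric += correct.toDouble / total
    numInst += 1
  }

  // Convert back to Float here, preserving the existing return type.
  def get: (String, Float) = (name, (sumMetric / numInst).toFloat)
}
```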
@Javelinjs ok, I updated the PR as you suggested.
Oh, it seems there is a build issue. I'm not sure it's related to my code (something about the Windows GPU build), but I will check this weekend.
LGTM. |
@benqua could you rebase the pr and re-trigger the CI build? |
Rebased. Waiting for the CI to complete. |
@piiswrong Could you help to do a force merge? |
I tried again, still no luck. |
Sorry to see that. Please rebase again, I think the CI is OK now. |
done. let's see... |
It fails with:
I will investigate when I find time. Ideas and help welcome :)
I've encountered the same error with test_operator_gpu.test_svmoutput_with_type on my setup based on the release branch v0.12.0. Please check our internal wiki, @KellenSunderland @mbaijal @larroy
https://builds.apache.org/blue/organizations/jenkins/incubator-mxnet/detail/PR-8297/13/pipeline line 452
That’s clearly not a CI issue |
The problem occurs when the difference in magnitude between the accumulated total and 1 becomes too big: the accuracy is not updated anymore due to the low precision of Float numbers.
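The precision cliff can be reproduced in a few lines (a standalone demonstration, not tied to any mxnet API): once a Float accumulator exceeds roughly 2^24, adding 1 is rounded away entirely, while a Double still registers the increment.

```scala
// Float has a 24-bit significand: above ~1.6e7 the gap between consecutive
// representable Floats exceeds 1, so unit increments are silently lost.
val floatAcc: Float = 1e8f
val doubleAcc: Double = 1e8
println(floatAcc + 1.0f == floatAcc)  // true: the Float update vanishes
println(doubleAcc + 1.0 == doubleAcc) // false: Double still tracks it
```

This is exactly the failure mode of a long-running sumMetric: each batch's contribution becomes smaller than the accumulator's rounding step.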
@Javelinjs, finally the PR passes all tests.
Thanks. |
Description
This PR changes EvalMetric.sumMetric from Float to Double.
Fix #8226