set_threshold sometimes returns NA response for thresholds equal to 0. #452
Comments
This is somewhat undefined behavior here. The learner says probability 0, which then gets re-scaled by multiplication with `1 / threshold`, and that factor is `Inf` for `threshold == 0`. Is this now a very good label to predict or a very bad one?
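A minimal R sketch of the re-scaling described above (the class names, example values, and use of `sweep()` are illustrative assumptions, not mlr3's internal code):

```r
# Probabilities for two hypothetical classes "a" and "b".
prob = matrix(c(0.0, 1.0,
                0.3, 0.7),
              nrow = 2, byrow = TRUE,
              dimnames = list(NULL, c("a", "b")))
threshold = c(a = 0, b = 1)

# Re-scale each column by 1 / threshold (i.e. divide by the threshold).
scaled = sweep(prob, 2, threshold, "/")
scaled
#>        a   b
#> [1,] NaN 1.0   # 0 / 0 is NaN: good label or bad label?
#> [2,] Inf 0.7   # 0.3 / 0 is Inf: class "a" trivially wins

max.col(scaled, ties.method = "random")
#> [1] NA  1      # the NaN row yields NA
```

Whether the `Inf` column should always win (threshold 0 meaning "always predict this class when its probability is positive") is exactly the semantic question; the `NaN` row is where the behavior is currently undefined.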
I understand. I guess we need to decide and document what happens here, but returning `NA` does not seem right.
We had exactly the same issue(s) in mlr2. These edge cases really need to be solved for robust applicability and ROC. It is also not that undefined? It is simply a tie? And we need/have tie handling anyway?
No, sorry, I didn't read correctly. Case 1: Case 2: I think the most reasonable thing is to define this now, as agreed?
Agreed. Documented and tested now.
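One purely illustrative way such a convention could look (the function name, the choice to define `0 / 0` as `0`, and the fallback to `max.col`'s tie handling are assumptions, not necessarily the behavior that was documented in mlr3):

```r
set_threshold_sketch = function(prob, threshold) {
  # Scale each class's probability by 1 / threshold.
  scaled = sweep(prob, 2, threshold, "/")
  # Define the 0 / 0 case as 0, so a zero probability never wins merely
  # because its threshold is also 0; remaining ties are broken randomly.
  scaled[is.nan(scaled)] = 0
  max.col(scaled, ties.method = "random")
}

prob = matrix(c(0, 1), nrow = 1, dimnames = list(NULL, c("a", "b")))
set_threshold_sketch(prob, threshold = c(a = 0, b = 1))
#> [1] 2   # class "b" is predicted instead of NA
```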
This happens in case predicted probabilities of `0` occur together with thresholds of `0`: `(1 / 0) * 0` -> `NaN` -> `max.col` returns `NA`. This leads to problems when we e.g. tune over thresholds, as this then breaks the prediction's `$score()` method.