Make it possible to exclude "null" class labels from the computation of metrics.
Motivation
Given the functionality to create a "null" class in the class config and to exclude it from the loss using ignore_class_index in the solver config, I would have expected the computation of metrics during evaluation to also exclude the null class, or at least for there to be a keyword argument in the solver / learner config (or in the call to train) that lets me explicitly ignore any null labels outright, treating them as unlabelled for the purposes of evaluation. Currently, precision is computed as if the null class were a valid label, so precision (and therefore the F1 score) looks much lower than one would expect if only non-null-labelled pixels were evaluated.
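As a toy illustration of the effect (not Raster Vision code; the class indices below are made up, with class 2 standing in for the "null" class):

```python
import numpy as np
from sklearn.metrics import precision_score

# Classes 0 and 1 are real; class 2 is the "null" (unlabelled) class.
y_true = np.array([0, 0, 1, 1, 2, 2, 2, 2])  # half the pixels carry the null label
y_pred = np.array([0, 0, 1, 1, 0, 1, 0, 1])  # model never predicts the null class

# Current behavior: null-labelled pixels count as errors, deflating precision (and F1).
print(precision_score(y_true, y_pred, average='macro', zero_division=0))  # ~0.33

# Proposed behavior: evaluate only the non-null-labelled pixels.
mask = y_true != 2
print(precision_score(y_true[mask], y_pred[mask], average='macro'))  # 1.0
```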
Pitch
Excluding null labels from model evaluation ensures that the metric values you see are the ones you would intuitively expect, rather than values you have to correct after the fact. This shouldn't be too difficult to implement.
I'd propose adding an option to the solver config to ignore null class labels, or making it the default behavior to ignore null class labels during metrics evaluation whenever ignore_class_index is specified (see the sketch below).
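A minimal sketch of what the configuration could look like, assuming Raster Vision's SolverConfig; the ignore_class_index_in_metrics flag is hypothetical (it is the thing being proposed, and the exact name is up for discussion):

```python
from rastervision.pytorch_learner import SolverConfig

null_class_id = 2  # example: index of the "null" class in the class config

solver_cfg = SolverConfig(
    # ... other solver options (learning rate, epochs, batch size, etc.) ...
    ignore_class_index=null_class_id,     # existing: excludes the null class from the loss
    ignore_class_index_in_metrics=True,   # hypothetical/proposed: also exclude it from metrics
)
```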
Alternatives
Specifying an external loss definition that computes these metrics in the manner suggested.
Additional context
Thanks for the suggestion. This makes sense to me. I suppose we can just drop the row and column of the ignored class from the confusion matrix before computing the metrics.
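A minimal sketch of that approach, assuming a NumPy confusion matrix with rows as ground truth and columns as predictions (the function names and that layout convention are illustrative, not the actual Raster Vision implementation):

```python
import numpy as np

def drop_ignored_class(conf_mat: np.ndarray, ignore_idx: int) -> np.ndarray:
    """Drop the row and column of the ignored (null) class from a confusion matrix."""
    keep = np.arange(conf_mat.shape[0]) != ignore_idx
    return conf_mat[np.ix_(keep, keep)]

def metrics_from_conf_mat(conf_mat: np.ndarray):
    """Per-class precision, recall, and F1 from a (reduced) confusion matrix."""
    tp = np.diag(conf_mat).astype(float)
    precision = tp / np.maximum(conf_mat.sum(axis=0), 1e-12)  # columns = predictions
    recall = tp / np.maximum(conf_mat.sum(axis=1), 1e-12)     # rows = ground truth
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return precision, recall, f1

# Usage: compute metrics only over the non-null classes.
# precision, recall, f1 = metrics_from_conf_mat(drop_ignored_class(conf_mat, ignore_class_index))
```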