Currently, a newly trained classifier is only accepted if its overall accuracy has improved by at least 1% over the source's previous classifier (when both classifiers are evaluated on the latest training data).
However, there may be cases where the source owner prefers the newly trained classifier even though accuracy didn't improve by 1%. For example, the new classifier might incorporate significantly more data for rare taxa, but since overall accuracy is dominated by common taxa, the accuracy barely moved.
We can offer more flexibility with options for the acceptance threshold:

- Allow tweaking the accuracy improvement threshold to another number: 0.5%, 2%, etc.
- Allow no accuracy improvement threshold at all, so any newly trained classifier is accepted
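As a rough sketch of what the options above could look like, here is a hypothetical acceptance check with a per-source threshold; the function and parameter names (`should_accept`, `improvement_threshold`) are illustrative assumptions, not the project's actual API:

```python
from typing import Optional

def should_accept(new_accuracy: float,
                  old_accuracy: float,
                  improvement_threshold: Optional[float] = 0.01) -> bool:
    """Decide whether to accept a newly trained classifier.

    `improvement_threshold` is a fraction (0.01 == the current 1% rule).
    Passing None disables the check, so any new classifier is accepted.
    Note: both accuracies are assumed to come from evaluating each
    classifier on the latest training data, per the current behavior.
    """
    if improvement_threshold is None:
        return True
    return new_accuracy - old_accuracy >= improvement_threshold

# Current behavior: fixed 1% threshold rejects a 0.3% gain
print(should_accept(0.903, 0.90))          # False
# Proposed: a lower per-source threshold (0.2%) would accept it
print(should_accept(0.903, 0.90, 0.002))   # True
# Proposed: no threshold at all accepts any classifier
print(should_accept(0.80, 0.90, None))     # True
```

The `None` case folds the second option into the same setting as the first, so a source only needs one configurable value rather than a separate on/off switch.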
Remember, there's still a requirement to have 10% more confirmed images than the last training, so this issue's proposal alone won't put us in danger of an unreasonable number of training requests. (That's more the domain of issue #410.)
There's also the opposite idea of accepting no newly trained classifiers at all, essentially an infinitely high threshold. However, it makes more sense to prevent new trainings than to let trainings run and then not use them, so I think that belongs in a separate issue.