Description
I am starting to use the loss functions in MLJBase that were recently migrated from MLJ in MLJBase 0.5.0. Thank you very much! :) I noticed that the terminology used in the project is "measure" instead of "loss". Is there a reason for calling it that way?
Regardless of this naming question, I would like to better understand the `default_measure` function. I don't understand how a default loss can be a function of the learning model; shouldn't it just be a function of the scitype of the model's output? Imagine that I am trying to compare different learning models consistently. If I solve a classification task with both a probabilistic and a deterministic classifier, the two will be assessed with different measures in, for example, a cross-validation procedure that falls back on `default_measure` when the user does not specify the loss explicitly.
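To make the concern concrete, here is a minimal hypothetical sketch of model-based dispatch; `ProbClassifier`, `DetClassifier`, and the returned measure names are placeholders of mine, not MLJBase's actual types or defaults:

```julia
# Toy illustration (not MLJBase code): the default measure is resolved from
# the *model's* type, so two classifiers solving the same task end up being
# scored differently.
abstract type Probabilistic end   # stand-in for probabilistic models
abstract type Deterministic end   # stand-in for deterministic models

struct ProbClassifier <: Probabilistic end
struct DetClassifier  <: Deterministic end

default_measure(::Probabilistic) = :cross_entropy            # placeholder default
default_measure(::Deterministic) = :misclassification_rate   # placeholder default

default_measure(ProbClassifier())  # => :cross_entropy
default_measure(DetClassifier())   # => :misclassification_rate
```

With this kind of dispatch, the cross-validation scores of the two classifiers are not directly comparable unless the user overrides the measure explicitly.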
I think my question is even more general. A loss function is formally defined as `loss(y, f(x))`, where `f(x)` is the output of the model `f`. So, in terms of dispatch, this function only sees the output of the model, not the model itself. Can we follow this convention instead and rewrite the losses to be a function of the scitype only?