Eval metrics calculated with model_performance() differ from those with caret::confusionMatrix() #250
Comments
Hi,
@maksymiuks can we take care of this in the default predict function?
It is, but the solution would require modifying the input model (adding an attribute in the explainer function).
Why not in
IMHO, either the explainer or the predict function should know which label is positive.
Let's focus on it after the DALEX 2.0.0 release.
Solved in #353
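For context, here is a minimal sketch of the kind of workaround the thread points at: passing a custom `predict_function` to `explain()` so that the explainer itself encodes which level is the positive class. The dataset, model, and column index below are illustrative assumptions, not taken from the issue or from #353.

```r
library(DALEX)
library(rpart)

data(titanic_imputed, package = "DALEX")  # example data shipped with DALEX

# Illustrative classifier; survived is coded 0/1.
tree <- rpart(factor(survived) ~ ., data = titanic_imputed)

positive_class <- "1"  # the level we want the metrics computed for

# Custom predict function: return the probability of the chosen
# positive class, so downstream metrics refer to that class.
pf <- function(model, newdata) {
  predict(model, newdata, type = "prob")[, positive_class]
}

explainer <- explain(tree,
                     data = titanic_imputed[, -8],  # drop the target column
                     y = titanic_imputed$survived,  # numeric 0/1 target
                     predict_function = pf,
                     label = "rpart, positive = 1")

model_performance(explainer)
```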
I'd like to compare a decision tree with a random forest. So I first trained a decision tree and calculated some evaluation measures on a test set using `caret::confusionMatrix()`. I did the same using the `DALEX` package. Although the trees and predictions are the same, the metrics (precision, recall, F1) calculated by `model_performance()` differ from those calculated with `caret::confusionMatrix()`. Why is this? Am I doing something wrong?

See: https://rpubs.com/friesewoudloper/630778