
Add reminder about validation curves' dependency on generalization performance metric #458

Open
ArturoAmorQ opened this issue Sep 14, 2021 · 0 comments

ArturoAmorQ commented Sep 14, 2021

In the Overfit-generalization-underfit notebook we present the validation curve of a `DecisionTreeRegressor()` using the mean absolute error to score the model.
This could be a good opportunity to remind people that scores and errors cover different ranges of values and that, therefore, the training and testing curves can swap their relative positions depending on the evaluation metric. I think this could partially help clarify the notion of good vs. bad fit.
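For illustration, here is a minimal sketch (not taken from the notebook) of the effect, assuming a synthetic regression dataset and scikit-learn's `validation_curve`: with an error metric (MAE, lower is better) the training curve sits below the testing curve, whereas with a score (R², higher is better) the training curve sits above it.

```python
# Sketch: the relative position of train/test curves depends on the metric.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import validation_curve
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)
max_depths = np.arange(1, 11)

for scoring in ["neg_mean_absolute_error", "r2"]:
    train_scores, test_scores = validation_curve(
        DecisionTreeRegressor(random_state=0), X, y,
        param_name="max_depth", param_range=max_depths,
        scoring=scoring, cv=5,
    )
    # scikit-learn negates error metrics so that "greater is better";
    # flip the sign back to report the raw mean absolute error.
    if scoring == "neg_mean_absolute_error":
        train_scores, test_scores = -train_scores, -test_scores
    print(scoring)
    print("  train:", train_scores.mean(axis=1).round(2))
    print("  test: ", test_scores.mean(axis=1).round(2))
```

Note the sign flip: because `scoring="neg_mean_absolute_error"` returns negated errors, plotting the raw MAE requires negating the returned arrays, which is exactly the kind of detail the reminder would cover.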

ArturoAmorQ changed the title from "Add reminder on validation curves dependancy on scoring method" to "Add reminder on validation curves dependency on scoring method" on Sep 14, 2021
ArturoAmorQ changed the title from "Add reminder on validation curves dependency on scoring method" to "Add reminder about validation curves' dependency on generalization performance metric" on Sep 16, 2021
ArturoAmorQ added this to the MOOC 3.0 milestone on Jan 25, 2022
lesteve modified the milestones: MOOC 3.0, MOOC 4.0 on Oct 18, 2022