Update EvalML section of tutorials #198
Conversation
Note - I did get slightly different results from the original tutorial around feature importance and prediction results. But unless I missed a new randomness parameter, I think this may be because of underlying changes in the evalml package. Not sure, though.
Looking good! Let's clear the outputs in the notebooks so the build process can automatically run the notebooks.
woodwork is a core dependency of evalml
This is so the build process can automatically run the notebooks.
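For reference, a minimal sketch of clearing saved outputs programmatically with nbformat is below; the notebook path is a placeholder, not an actual file in this PR, and `jupyter nbconvert` with its ClearOutput preprocessor can do the same thing from the command line.

```python
# Minimal sketch: strip saved outputs from a tutorial notebook so the docs
# build, which executes the notebooks itself, starts from a clean state.
# The path below is hypothetical, not the actual file touched in this PR.
import nbformat

path = "docs/source/examples/evalml_tutorial.ipynb"  # hypothetical path
nb = nbformat.read(path, as_version=4)
for cell in nb.cells:
    if cell.cell_type == "code":
        cell.outputs = []            # drop stored outputs
        cell.execution_count = None  # reset execution counters
nbformat.write(nb, path)
```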
Codecov Report
@@           Coverage Diff           @@
##             main      #198   +/-   ##
=========================================
  Coverage   100.00%   100.00%
=========================================
  Files           28        28
  Lines         1259      1259
=========================================
  Hits          1259      1259
This looks great! I left one minor comment. Other than that, we'll be ready to merge after updating with main.
Happy to squash or rebase if y'all do that, too.
Fixes #197
OK, I think we're good. I'm assuming you'll merge it, as I don't seem to be able to. Thanks!
Thanks @flowersw! Yes, the difference in output you saw is due to updates we've made to EvalML, including adding more models and fixing bugs.
This simply updates the evalml section of the tutorial notebooks in the documentation to be up to date with evalml 0.17, and pins evalml and woodwork to higher, greater-than-or-equal (>=) versions in the docs requirements.txt file.
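For illustration, the pins in the docs requirements.txt look roughly like the excerpt below; only the evalml 0.17 floor comes from this PR, and the woodwork bound shown is just a placeholder.

```
evalml>=0.17.0
woodwork>=0.0.1
```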
I did want to note, in case these tutorials are benchmarks or meant to be exactly reproducible in some way, that I noticed a few slight differences in the output of the models, e.g. slightly different evaluation accuracy, slightly different feature importances, etc. Unless I'm missing a new "randomness" parameter, I would guess that's due to the underlying changes in evalml, but I'm not sure.
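For anyone who wants repeated runs to line up more closely, here is a rough sketch of pinning the seed. It assumes evalml's AutoMLSearch accepts a random_seed argument (named random_state in some older releases) and that the breast-cancer demo loader is available; the exact constructor signature has changed across evalml versions, so treat this as illustrative rather than the tutorial's actual code.

```python
# Rough sketch of fixing the randomness in an evalml AutoML run.
# Assumes an AutoMLSearch API similar to recent evalml releases; the seed
# parameter name (random_seed vs. random_state) and constructor arguments
# have varied across versions.
from evalml.demos import load_breast_cancer
from evalml.automl import AutoMLSearch

X, y = load_breast_cancer()  # small built-in demo dataset

automl = AutoMLSearch(
    X_train=X,
    y_train=y,
    problem_type="binary",
    random_seed=0,  # pin the seed so repeated runs produce matching results
)
automl.search()
print(automl.rankings)
```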