Updates for Existing Demos #154
Codecov Report

@@            Coverage Diff            @@
##               main      #154   +/-   ##
=========================================
  Coverage   100.00%   100.00%
=========================================
  Files           28        28
  Lines         1253      1254     +1
=========================================
+ Hits          1253      1254     +1
"source": [ | ||
"### Next Steps\n", | ||
"\n", | ||
"At this point, we have completed the machine learning application. We can revisit each step to explore and fine-tune with different parameters until the model is ready for deployment." |
@jeff-hernandez these two examples you've added are great!! It's really exciting to have evalml included here 😁 👏
I had one suggestion: you could add something at the end like this
For more information on how to work with the models produced by EvalML, take a look at the EvalML documentation.
I know you already linked to the evalml docs at the top, but it could be nice to call this out for people who skim to the bottom and are left wondering what else they can do.
"source": [ | ||
"best_pipeline = automl.best_pipeline.fit(X_train, y_train)\n", | ||
"score = best_pipeline.score(X_holdout, y_holdout, objectives=['f1'])\n", | ||
"dict(score)" |
Maybe also show the top features using permutation importance or the prediction explanation functionality, just to show off evalml some more. Don't forget to update evalml to the right minimum version if you do!
This could be added in a future PR, though.
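The permutation importance being suggested here can be sketched in plain NumPy to show the idea; the function and argument names below are illustrative, not the evalml API (evalml ships its own helpers for this in its model-understanding module):

```python
import numpy as np

def permutation_importance(predict, X, y, score_fn, seed=0):
    """Shuffle one column at a time and report how much the score drops
    relative to the unshuffled baseline (hypothetical helper, not evalml's)."""
    rng = np.random.default_rng(seed)
    baseline = score_fn(y, predict(X))
    importances = []
    for j in range(X.shape[1]):
        X_perm = X.copy()
        rng.shuffle(X_perm[:, j])  # break the column/target relationship
        importances.append(baseline - score_fn(y, predict(X_perm)))
    return np.array(importances)

def accuracy(y_true, y_pred):
    return float(np.mean(y_true == y_pred))

# Toy data: column 0 fully determines the label, column 1 is pure noise.
rng = np.random.default_rng(42)
X = rng.random((200, 2))
y = (X[:, 0] > 0.5).astype(int)
model = lambda X: (X[:, 0] > 0.5).astype(int)

imp = permutation_importance(model, X, y, accuracy)
```

With this setup the informative column gets a large importance and the noise column gets exactly zero, since the toy model never reads it.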
The `graph_feature_importance` plot doesn't render for me in JupyterLab, but I can use the data frame from `feature_importance` to create a plot.
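The data-frame workaround mentioned above can be sketched like this; the `fi` frame is a toy stand-in for the pipeline's `feature_importance` output, which I'm assuming is a DataFrame with `feature` and `importance` columns (check your evalml version):

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend so this also runs outside a notebook
import matplotlib.pyplot as plt

# Toy stand-in for best_pipeline.feature_importance (assumed column names).
fi = pd.DataFrame({
    "feature": ["mean radius", "mean texture", "mean area"],
    "importance": [0.42, 0.13, 0.31],
})

fi = fi.sort_values("importance")  # largest bar ends up at the top of barh
fig, ax = plt.subplots()
ax.barh(fi["feature"], fi["importance"])
ax.set_xlabel("importance")
ax.set_title("Feature importance")
fig.tight_layout()
fig.savefig("feature_importance.png")
```

In a notebook you would drop the `Agg` backend and the `savefig` call and let the figure display inline.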
It might render fine in the docs even if the Jupyter one doesn't work. This is also a known issue.
It doesn't render in the docs when I build locally.
Oh well, okay.
@jeff-hernandez @kmax12 you're probably running into this: alteryx/evalml#1040
@jeff-hernandez, do you have the `ipywidgets` pip package installed locally? I see you're listing `evalml>=0.11.2` in the requirements, which looks fine; that should pull in `ipywidgets`, and the graph should show up. Perhaps you need to rerun `pip install` locally? Ping the team if you'd like some help with that.
Oh, also @jeff-hernandez: @freddyaboulton ran into this issue locally in JupyterLab, where he had to install the JupyterLab extension in order to get this working. If you use JupyterLab as opposed to classic Jupyter Notebook, you'll probably also need to do this.
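For reference, the setup being discussed usually amounts to the standard `ipywidgets` installation steps for JupyterLab of that era (these commands are the general recipe from the ipywidgets docs, not taken from this thread; JupyterLab 3+ no longer needs the labextension step):

```shell
# Install the widgets package itself
pip install ipywidgets

# Pre-3.0 JupyterLab also needs the widgets front-end extension
jupyter labextension install @jupyter-widgets/jupyterlab-manager
```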
The "using label transforms" notebook now has the outputs of each cell already as opposed to letting the docs run the notebook to get the output. Is that intentional? |
@rwedge no, I'll clear the output. I want the docs to run that notebook as well. |
Looks good
Closes #143 by updating notebook examples.