
Updates for Existing Demos #154

Merged: 109 commits merged into main, Aug 20, 2020

Conversation

jeff-hernandez (Collaborator) commented Jul 22, 2020

Closes #143 by updating notebook examples.

codecov bot commented Jul 22, 2020

Codecov Report

Merging #154 into main will not change coverage.
The diff coverage is 100.00%.

@@            Coverage Diff            @@
##              main      #154   +/-   ##
=========================================
  Coverage   100.00%   100.00%           
=========================================
  Files           28        28           
  Lines         1253      1254    +1     
=========================================
+ Hits          1253      1254    +1     
| Impacted Files | Coverage Δ |
|---|---|
| composeml/label_times/plots.py | 100.00% <100.00%> (ø) |

@jeff-hernandez jeff-hernandez changed the base branch from master to main July 29, 2020 18:21
@jeff-hernandez jeff-hernandez marked this pull request as ready for review August 12, 2020 19:05
"source": [
"### Next Steps\n",
"\n",
"At this point, we have completed the machine learning application. We can revisit each step to explore and fine-tune with different parameters until the model is ready for deployment."

@jeff-hernandez these two examples you've added are great!! It's really exciting to have evalml included here 😁 👏

I had one suggestion: you could add something like this at the end:

For more information on how to work with the models produced by EvalML, take a look at the EvalML documentation.

I know you already linked to the evalml docs at the top, but it could be nice to call this out for people who skim to the bottom and are left wondering what else they can do.

"source": [
"best_pipeline = automl.best_pipeline.fit(X_train, y_train)\n",
"score = best_pipeline.score(X_holdout, y_holdout, objectives=['f1'])\n",
"dict(score)"
kmax12 (Contributor) commented Aug 12, 2020

Maybe also show the top features using permutation importance or the prediction explanation functionality, just to show off evalml some more. Don't forget to update evalml to the right minimum version if you do!

This could be added in a future PR, though.
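
For reference, a minimal sketch of what that might look like (my illustration, not code from this PR), assuming the variables from the notebook cell quoted above and an evalml release that exposes calculate_permutation_importance under evalml.model_understanding (the module path has moved between releases):

```python
# Hedged sketch of the suggestion above, not part of this PR.
# Assumes best_pipeline, X_holdout, and y_holdout from the quoted cell,
# and an evalml version with calculate_permutation_importance available.
from evalml.model_understanding import calculate_permutation_importance

perm_importance = calculate_permutation_importance(
    best_pipeline, X_holdout, y_holdout, objective='f1',
)
print(perm_importance.head(10))  # features ranked by importance
```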

jeff-hernandez (Collaborator, Author)

The graph_feature_importance plot doesn't render for me in JupyterLab, but I can use the feature_importance data frame to create a plot.
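
A minimal sketch of that workaround (my illustration, assuming best_pipeline is the fitted pipeline from the earlier cell and that its feature_importance attribute is a data frame with feature and importance columns):

```python
# Hedged sketch: plot the feature_importance data frame directly
# instead of relying on graph_feature_importance.
import matplotlib.pyplot as plt

fi = best_pipeline.feature_importance  # assumed: columns 'feature', 'importance'
fi = fi.sort_values('importance')

plt.barh(fi['feature'], fi['importance'])
plt.xlabel('Importance')
plt.title('Feature Importance')
plt.tight_layout()
plt.show()
```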

kmax12 (Contributor)

It might render fine in the docs even if it doesn't work in Jupyter. This is also a known issue.

jeff-hernandez (Collaborator, Author)

It doesn't render in the docs when I build locally.

kmax12 (Contributor)

Oh well. Okay.


@jeff-hernandez @kmax12 you're probably running into this: alteryx/evalml#1040

@jeff-hernandez, do you have the ipywidgets pip package installed locally? I see you're listing evalml>=0.11.2 in the requirements, which looks fine; that should include ipywidgets, and the graph should show up. Perhaps you need to rerun pip install locally? Ping the team if you'd like some help with that.


Oh, also @jeff-hernandez: @freddyaboulton ran into this issue locally in JupyterLab, where he had to install the JupyterLab extension to get it working. If you use JupyterLab as opposed to classic Jupyter, you'll probably need to do this too.
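
For anyone hitting the same thing, a quick sanity check along these lines may help (my sketch; the thread doesn't name the extension, but it is likely the standard @jupyter-widgets/jupyterlab-manager):

```python
# Hedged sketch: confirm ipywidgets is importable in the kernel JupyterLab
# uses. If it imports but widgets still don't render, the JupyterLab
# widgets extension (likely @jupyter-widgets/jupyterlab-manager) is missing.
try:
    import ipywidgets
    print("ipywidgets", ipywidgets.__version__, "is installed")
except ImportError:
    print("ipywidgets is missing; try `pip install ipywidgets` and, for "
          "JupyterLab, `jupyter labextension install "
          "@jupyter-widgets/jupyterlab-manager`")
```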

@jeff-hernandez jeff-hernandez changed the base branch from main to master August 18, 2020 15:02
@jeff-hernandez jeff-hernandez changed the base branch from master to main August 18, 2020 15:02
@jeff-hernandez jeff-hernandez changed the title from "Updates Demos" to "Updates for Existing Demos" Aug 18, 2020
rwedge (Contributor) commented Aug 19, 2020

The "using label transforms" notebook now has the outputs of each cell already as opposed to letting the docs run the notebook to get the output. Is that intentional?

jeff-hernandez (Collaborator, Author)
@rwedge no, I'll clear the output. I want the docs to run that notebook as well.
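
A minimal sketch of one way to clear saved outputs programmatically (my illustration; the notebook path is hypothetical, and the nbconvert CLI can do the same thing):

```python
# Hedged sketch: strip saved outputs so the docs build re-executes the
# notebook. The path below is hypothetical.
import nbformat

path = "docs/source/examples/using_label_transforms.ipynb"  # hypothetical
nb = nbformat.read(path, as_version=4)
for cell in nb.cells:
    if cell.cell_type == "code":
        cell.outputs = []
        cell.execution_count = None
nbformat.write(nb, path)
```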

rwedge (Contributor) left a comment

Looks good

Development

Successfully merging this pull request may close these issues:

- Long Runtime in Notebook Examples (#143)