
Update EvalML section of tutorials #198


Merged · 10 commits · Jan 22, 2021

Conversation

@flowersw (Contributor)

This updates the EvalML section of the tutorial notebooks in the documentation to be current with EvalML 0.17, and raises the minimum pinned versions (>=) of evalml and woodwork in the docs requirements.txt file.
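For illustration, the pins in the docs requirements.txt would look something like this; only the EvalML 0.17 floor is stated above, and the woodwork version is a hypothetical placeholder:

```
evalml>=0.17.0
woodwork>=0.1.0  # hypothetical floor; match the version evalml 0.17 requires
```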

I did want to note that, in case these tutorials are benchmarks or meant to be exactly reproducible, I noticed a few slight differences in the models' output, e.g. slightly different evaluation accuracy and slightly different feature importances. Unless I'm missing a new "randomness" parameter, I would guess that's due to underlying changes in evalml, but I'm not sure.

    Note - I did get slightly different results from the original tut
    around feature importance, and prediction results. But, unless I
    missed a new randomness param, I think this may be because of
    underlying changes in the evalml package. Not sure though.
@CLAassistant commented Jan 17, 2021

CLA assistant check
All committers have signed the CLA.

@jeff-hernandez (Contributor) left a comment

Looking good! Let's clear the outputs in the notebooks so the build process can run the notebooks automatically.

    woodwork is a core dependency of evalml
    this is so the build process can automatically run the notebooks
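For reference, recent versions of nbconvert can clear outputs with `jupyter nbconvert --clear-output --inplace <notebook>.ipynb`. Since a notebook is just JSON, the same thing can also be done with a small stdlib script; this is a sketch, and the function name is my own:

```python
import json

def clear_notebook_outputs(path):
    """Clear outputs and execution counts from every code cell in a .ipynb
    file, so a docs build can re-execute the notebook from scratch."""
    with open(path, encoding="utf-8") as f:
        nb = json.load(f)
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            cell["outputs"] = []           # drop stored outputs
            cell["execution_count"] = None  # reset execution counter
    with open(path, "w", encoding="utf-8") as f:
        json.dump(nb, f, indent=1)
```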
@codecov bot commented Jan 20, 2021

Codecov Report

Merging #198 (d51a397) into main (4003974) will not change coverage.
The diff coverage is n/a.

@@            Coverage Diff            @@
##              main      #198   +/-   ##
=========================================
  Coverage   100.00%   100.00%           
=========================================
  Files           28        28           
  Lines         1259      1259           
=========================================
  Hits          1259      1259           

@jeff-hernandez jeff-hernandez self-requested a review January 20, 2021 15:45
@jeff-hernandez (Contributor) left a comment

This looks great! I left one minor comment. Other than that, we'll be ready to merge after updating with main.

@flowersw (Contributor, Author)

Happy to squash or rebase if y'all do that, too.

@jeff-hernandez jeff-hernandez self-requested a review January 21, 2021 15:40
@jeff-hernandez (Contributor) left a comment

Looks good! If you want to automatically close issue #197 after merging this PR, you can link it by using a supported closing keyword (e.g. "Fixes #197") in the pull request's description; see GitHub's documentation on linking a pull request to an issue. Thank you for the contribution @flowersw!

@flowersw changed the title from "Update EvalML section of tutorials" to "Fixes alteryx/compose#197 Update EvalML section of tutorials" Jan 21, 2021
@flowersw changed the title back to "Update EvalML section of tutorials" Jan 21, 2021
@flowersw (Contributor, Author)

Fixes #197

@flowersw (Contributor, Author)

OK, I think we're good. I'm assuming you'll merge it, as I don't seem to be able to. Thanks!

@jeff-hernandez jeff-hernandez merged commit b9b6bf5 into alteryx:main Jan 22, 2021
@dsherry commented Jan 22, 2021

Thanks @flowersw ! Yes, the difference in output you saw is due to updates we've made to EvalML, including adding more models and fixing bugs.

@jeff-hernandez jeff-hernandez mentioned this pull request Feb 11, 2021