
1692 iterative graph not showing in docs #3592

Merged: 36 commits from 1692-Iterative-Graph-Not-Showing into main on Jul 7, 2022

Conversation

@MichaelFu512 (Contributor) commented Jun 28, 2022

Pull Request Description

Fixes the iterative graph not showing up in docs. This is a placeholder fix, because the docs do not actually load the dynamic iterative graph: instead, after the search completes, this PR graphs all of the points and displays that static figure.

I changed all the docs that run a search so they no longer show the iterative graph, since it would otherwise have shown the user empty graphs.

Closes #1692


After creating the pull request: in order to pass the release_notes_updated check you will need to update the "Future Release" section of docs/source/release_notes.rst to include this pull request by adding :pr:123.

@codecov bot commented Jun 28, 2022

Codecov Report

Merging #3592 (7b343fe) into main (426f2b3) will increase coverage by 0.1%.
The diff coverage is 100.0%.

@@           Coverage Diff           @@
##            main   #3592     +/-   ##
=======================================
+ Coverage   99.7%   99.7%   +0.1%     
=======================================
  Files        335     335             
  Lines      33381   33387      +6     
=======================================
+ Hits       33252   33258      +6     
  Misses       129     129             
Impacted Files Coverage Δ
evalml/automl/automl_search.py 99.6% <100.0%> (+0.1%) ⬆️

Continue to review full report at Codecov.


@eccabay (Contributor) left a comment:

Ok, I fell down a rabbit hole of iteration plots while reviewing this, and I don't actually think this is as much of a band-aid solution as I originally thought. I left a comment about renaming the show_iteration_plot argument, and I think if we do the refactoring I mentioned there, this will actually make a lot of sense.

Also, I don't think we should change the default behavior. It makes it easier to build the docs, but harder for our users to get that interactive plot. I would rather we update the docs to set show_iteration_plot (interactive_plot after renaming) to False and add a note there mentioning that this is a documentation workaround and that users will see an interactive plot by default.

self.search_iteration_plot = self.plot.search_iteration_plot(
interactive_plot=show_iteration_plot
)
return self.search_iteration_plot
Contributor:

Do we need to return this? Does it not render in the docs without it?

Contributor Author:

From what I remember, without returning the self.search_iteration_plot, the docs don't render.

I could use figure.show() but that causes tests to take a really long time. (https://github.com/alteryx/evalml/runs/7100924419?check_suite_focus=true)

TL;DR: with figure.show(), the automl tests were taking over an hour when they usually take 20-25 minutes. When I ran the tests locally, figure.show() caused every automl test to open a web browser tab and show me the graph, multiple times (before I stopped it manually, Safari had 30+ tabs of just graphs). I suspect that's why it takes so long.

I tried to mess around with figure.show() parameters, but couldn't figure out a way to render the graph so that:

  1. It's interactive (and not like a jpg or png of the graph)
  2. It doesn't make the local test open up a browser every time it runs automl.search()
  3. It does both of the above and actually renders when I build the docs using sphinx (sometimes it loads locally, then gives an error message when I upload it to GitHub).

Of course there's a high chance I'm missing something pretty obvious (maybe I could try making it so that if figure.show() tries to use your web-browser to render it, then it doesn't?).
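For reference, the renderer check this discussion eventually led to can be illustrated in isolation. This is a minimal, dependency-free sketch; `should_render_inline` is a hypothetical helper name, not evalml code:

```python
def should_render_inline(default_renderer: str) -> bool:
    """True when the given plotly renderer would NOT open a browser tab.

    plotly.io.renderers.default is "browser" when figure.show() would pop a
    new tab; inside notebooks and sphinx/nbconvert builds it is typically an
    inline renderer, so calling show() there is safe.
    """
    return default_renderer != "browser"


# In the real code the check would consult plotly's global setting, e.g.:
#   import plotly.io as pio
#   if should_render_inline(pio.renderers.default):
#       self.search_iteration_plot.show()
assert should_render_inline("notebook_connected")
assert not should_render_inline("browser")
```

Guarding show() this way keeps local test runs from opening browser tabs while still rendering the figure in notebook and docs builds.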

Comment on lines 1025 to 1028
if not show_iteration_plot:
self.search_iteration_plot = self.plot.search_iteration_plot(
interactive_plot=show_iteration_plot
)
Contributor:

I had a really hard time understanding what this does. After falling down an iteration_plot rabbit hole, I think we should rename the show_iteration_plot argument to interactive_plot, to match the arg that's passed into search_iteration_plot.

In my mind, that's how this logic makes sense. If interactive_plot is True, we call self.plot.search_iteration_plot up around line 910, which will create the interactive plot as we know and love. If interactive_plot is False and verbose is True (the other scenario where we'd want a plot), we run this code at the end to generate that final post-search plot that the docs require.

Could you refactor this section and the section around line 910 to follow that logic? It'd keep things much simpler and easier to follow.

@eccabay (Contributor) commented Jul 1, 2022:

To be a little clearer here, I think this section could look like

if self.verbose and not interactive_plot:
    self.search_iteration_plot = self.plot.search_iteration_plot(interactive_plot=interactive_plot)

Comment on lines 1013 to 1024
# go = import_or_raise(
# "plotly.graph_objects",
# error_msg="Cannot find dependency plotly.graph_objects",
# )
# title = "Pipeline Search: Iteration vs. {}<br><sub>Gray marker indicates the score at current iteration</sub>".format(
# self.objective.name
# )
# layout = {
# "title": title,
# "xaxis": {"title": "Iteration", "rangemode": "tozero"},
# "yaxis": {"title": "Score"},
# }
Contributor:

We don't need commented out code!

@chukarsten (Collaborator) left a comment:

I think Becca's left some good comments to address. Please keep an eye out for committing commented-out code. We find that a good way to prevent this is, while your PR is in draft status, to do a PR review on yourself and look over the diff. Review your PR as if you were an approver before changing its status to ready!

},
"vscode": {
"interpreter": {
"hash": "425958b484b95e5a5ab632cc76b33302e47efa72ef6819296d6f51644ebb328b"
Contributor Author:

I don't recall adding any of this, I wonder where it came from?

Contributor:

Did you open the notebook using VS Code? I'm not sure if make lint-fix covers this part of the notebook, but you could always manually remove it by opening the notebook in a text editor.

Contributor Author:

I did open the notebook using VS Code. I'll remove these from now on.

self.search_iteration_plot = self.plot.search_iteration_plot(
interactive_plot=interactive_plot
)
if pio.renderers.default != "browser":
Contributor Author:

This stops the automl tests from mass opening graphs in the browser, which was causing the tests to be really slow.

@@ -281,7 +281,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3 (ipykernel)",
Contributor Author:

Not sure where these metadata changes are coming from.

Contributor:

I have the same issue. If anyone has a way to fix this, let me know; my current solution is to edit the file here in GitHub to remove these lines (same with the vscode hash down below).

@MichaelFu512 MichaelFu512 requested a review from eccabay July 5, 2022 22:57
@MichaelFu512 MichaelFu512 enabled auto-merge (squash) July 5, 2022 23:26
@eccabay (Contributor) left a comment:

Looks good, thanks for making these changes!


@@ -979,7 +979,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3.8.13 ('test123')",
Contributor:

How did this get here?

)
if pio.renderers.default != "browser":
self.search_iteration_plot.show()
# return self.search_iteration_plot
Contributor:

This comment should be removed!

Contributor Author:

Aghhhh, I somehow missed that.

Comment on lines 870 to 872

Returns:
A plotly figure of the completed search
Contributor:

Now that we're using the browser check instead, we no longer need to have this in the docstring!

@chukarsten (Collaborator) left a comment:

Looks good, thanks Michael!

@MichaelFu512 MichaelFu512 merged commit 26896e9 into main Jul 7, 2022
@MichaelFu512 MichaelFu512 deleted the 1692-Iterative-Graph-Not-Showing branch July 7, 2022 20:19
@MichaelFu512 MichaelFu512 restored the 1692-Iterative-Graph-Not-Showing branch July 7, 2022 21:22
@chukarsten chukarsten deleted the 1692-Iterative-Graph-Not-Showing branch July 7, 2022 23:37
@chukarsten chukarsten mentioned this pull request Jul 24, 2022
Successfully merging this pull request may close these issues.

AutoMLSearch plots not appearing in Docs