
Updated _evaluate_pipelines to consolidate side effects #1337

Merged
10 commits merged into main on Nov 5, 2020

Conversation

@christopherbunn (Contributor) commented Oct 22, 2020

Taking a different approach to the _evaluate side effects by restructuring and renaming it to _evaluate_pipelines. See this comment for more info.
Resolves #1295

@codecov (bot) commented Oct 26, 2020

Codecov Report

Merging #1337 into main will decrease coverage by 0.01%.
The diff coverage is 100.00%.


@@            Coverage Diff             @@
##             main    #1337      +/-   ##
==========================================
- Coverage   99.95%   99.95%   -0.00%     
==========================================
  Files         213      213              
  Lines       13938    13934       -4     
==========================================
- Hits        13931    13927       -4     
  Misses          7        7              
Impacted Files                               Coverage Δ
evalml/automl/automl_search.py               99.62% <100.00%> (ø)
evalml/tests/automl_tests/test_automl.py     100.00% <100.00%> (ø)
evalml/utils/logger.py                       100.00% <100.00%> (ø)

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@christopherbunn christopherbunn changed the title Draft: Alternative to 1295 Updated _evaluate_pipelines to consolidate side effects Oct 26, 2020
@christopherbunn christopherbunn marked this pull request as ready for review October 26, 2020 22:08
@freddyaboulton (Contributor) left a comment

@christopherbunn Looks great!

mock_next_batch.side_effect = [[dummy_binary_pipeline_class(parameters={}), dummy_binary_pipeline_class(parameters={})]]
automl = AutoMLSearch(problem_type='binary', allowed_pipelines=[dummy_binary_pipeline_class])
automl.search(X, y)
# Mock rankings so `best_pipeline` setting does not error out
with patch('evalml.automl.AutoMLSearch.rankings', new_callable=PropertyMock) as mock_rankings:
Contributor

Why did you change mock_evaluate_pipelines from .side_effect to .return_value?

So we need this new PropertyMock because you're mocking _evaluate_pipelines, which is what calls _add_result (which is what populates the data for the ranking table)?

I wonder if we even need this code block in the test. I think the point of the test is to verify that the search ends when it encounters a batch of all NaN scores (which is what happens in the code block that doesn't have the property mock).

This would be fine to merge as is but I'm guessing we can simplify this test a bit without losing any coverage.

@christopherbunn (Contributor, Author) replied

Why did you change mock_evaluate_pipelines from .side_effect to .return_value?

Good question. I changed it to .return_value because I needed the mock _evaluate_pipelines to return the entire list of pipeline scores. Passing a list to .side_effect causes it to iterate through the list, returning one element per call.

Re: the code block, you're right about the reason why we have to populate the ranking table manually. My impression of the intent of this section was to show that having an np.nan score as one of the pipeline results won't raise the AutoMLSearchException. If this isn't a necessary check, then I'm good with cutting out this section.
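
For reference, here is a minimal sketch (not code from this PR) of the difference discussed above, using a standalone unittest.mock MagicMock in place of the patched _evaluate_pipelines:

from unittest.mock import MagicMock

# .return_value: every call returns the whole list of scores.
mock_evaluate_pipelines = MagicMock(return_value=[0.9, 0.8, 0.7])
assert mock_evaluate_pipelines() == [0.9, 0.8, 0.7]
assert mock_evaluate_pipelines() == [0.9, 0.8, 0.7]

# .side_effect with a list: each call consumes the next element,
# so a single call yields only one score and the list eventually runs out.
mock_evaluate_pipelines = MagicMock(side_effect=[0.9, 0.8, 0.7])
assert mock_evaluate_pipelines() == 0.9
assert mock_evaluate_pipelines() == 0.8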

@freddyaboulton (Contributor) commented Oct 29, 2020

Maybe we can delete that code block then? The second code block returns a NaN in the second batch and the search doesn't terminate in that case, but we can check that explicitly with assert mock_evaluate_pipelines.call_count == 3.

This isn't blocking merge but my vote is to remove code that provides redundant coverage while we're at it.
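
As an illustration only (not code from this PR), asserting on a mock's call_count looks like the following; the names and numbers are stand-ins:

from unittest.mock import MagicMock

mock_evaluate_pipelines = MagicMock(return_value=[0.5])

for _ in range(3):   # stand-in for three evaluated batches
    mock_evaluate_pipelines()

# Fails if the search evaluated more or fewer batches than expected.
assert mock_evaluate_pipelines.call_count == 3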

if add_single_pipeline:
    add_single_pipeline = False

except KeyboardInterrupt:
Contributor

From what I can tell there isn't any change to the keyboard interrupt feature! What do you think, @christopherbunn?

@christopherbunn (Contributor, Author) replied

I think technically, if the user terminates the search while the next batch is being generated, it wouldn't get caught by this KeyboardInterrupt. In practice, getting the next batch takes so little time that it's very unlikely this will occur.
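
A minimal sketch of the window being described, with made-up helper names rather than the actual evalml internals: the try/except wraps evaluation, so an interrupt raised while the next batch is being generated propagates instead of being caught.

def run_search(get_next_batch, evaluate_batch, handle_interrupt):
    while True:
        batch = get_next_batch()      # a Ctrl-C here propagates: it is outside the try
        if not batch:
            return
        try:
            evaluate_batch(batch)     # a Ctrl-C here is caught below
        except KeyboardInterrupt:
            batch = handle_interrupt(batch)
            if len(batch) == 0:       # the user chose to stop the search
                return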

@christopherbunn christopherbunn force-pushed the 1295_`_evaluate`_changes branch 2 times, most recently from b2d304b to 3bf80f0 Compare October 30, 2020 15:10
return True

scores = self._evaluate_pipelines(pipelines, X, y, baseline=True)
if scores == []:
@dsherry (Contributor) commented Oct 30, 2020

return len(scores) == 0 ?

@dsherry (Contributor) left a comment

@christopherbunn LGTM!

I left one question about show_batch_output. I also left a comment about deleting an old docstring. Otherwise, nothing blocking.

Returns:
self
feature_types (list, optional): list of feature types, either numerical or categorical.
Categorical features will automatically be encoded
Contributor

@christopherbunn I think we deleted this deprecated feature_types field last week


except KeyboardInterrupt:
    current_pipeline_batch = self._handle_keyboard_interrupt(pipeline, current_pipeline_batch)
    if current_pipeline_batch == []:
Contributor

Style nit-pick: if len(current_pipeline_batch) == 0. I don't think there's much of a functional difference here, I just think checking the length is clearer.

@@ -425,6 +427,7 @@ def search(self, X, y, data_checks="auto", show_iteration_plot=True):
        if self.allowed_pipelines == []:
            raise ValueError("No allowed pipelines to search")
        if self.max_batches and self.max_iterations is None:
            self.show_batch_output = True
Contributor

@christopherbunn what's this? Why do we need it?

@christopherbunn (Contributor, Author) replied

Currently, we show the batch number only if the user specifies max_batches. Since we use batching internally even when only max_iterations is specified, there isn't really a clean way to infer whether we want to show the batch number other than setting a variable at the beginning of the search.
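
A minimal, self-contained sketch of the pattern being described (the names are illustrative, not the evalml API): the flag is decided once up front and consulted whenever a batch is logged.

class SearchLogger:
    def __init__(self, max_batches=None, max_iterations=None):
        # Show batch numbers only when the user asked for batches explicitly.
        self.show_batch_output = bool(max_batches) and max_iterations is None

    def log_batch(self, batch_number, n_pipelines):
        if self.show_batch_output:
            print(f"Batch {batch_number}: evaluating {n_pipelines} pipelines")
        else:
            print(f"Evaluating {n_pipelines} pipelines")

logger = SearchLogger(max_batches=5)
logger.log_batch(1, 8)   # prints "Batch 1: evaluating 8 pipelines"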

@CLAassistant commented Nov 3, 2020

CLA assistant check
All committers have signed the CLA.

@christopherbunn christopherbunn force-pushed the 1295_`_evaluate`_changes branch 3 times, most recently from 8abf084 to 41d8163 Compare November 4, 2020 16:53
@dsherry dsherry merged commit 9451546 into main Nov 5, 2020
@dsherry (Contributor) commented Nov 5, 2020

@christopherbunn and I saw intermittent failures in the Linux CI tests on his branch. Since they were coming from pipeline tests and this PR only changes automl code, we concluded those failures aren't introduced by this PR. We'll keep debugging.

Development

Successfully merging this pull request may close these issues.

Refactor AutoML Search for Parallel Workers
4 participants