
Conversation

Contributor

@bchen1116 bchen1116 commented Mar 4, 2022

fix #3295

Don't use the threshold of the last CV fold. Instead, we set it to None.

@bchen1116 bchen1116 self-assigned this Mar 4, 2022

codecov bot commented Mar 4, 2022

Codecov Report

Merging #3360 (8bd11ac) into main (0553d13) will increase coverage by 0.1%.
The diff coverage is 100.0%.

@@           Coverage Diff           @@
##            main   #3360     +/-   ##
=======================================
+ Coverage   99.6%   99.6%   +0.1%     
=======================================
  Files        329     329             
  Lines      32216   32229     +13     
=======================================
+ Hits       32086   32099     +13     
  Misses       130     130             
Impacted Files                                           Coverage Δ
evalml/automl/automl_search.py                           99.7% <100.0%> (ø)
.../automl_tests/test_automl_search_classification.py    96.5% <100.0%> (+0.1%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 0553d13...8bd11ac.

@bchen1116 bchen1116 changed the title Fix threhsold return in AutoMLSearch Fix threshold return in AutoMLSearch Mar 4, 2022
Collaborator

@jeremyliweishih jeremyliweishih left a comment

LGTM. Is there anywhere we should add more info or make changes in our docs? Not sure if we have anything stating that we recommend retraining pipelines after search.
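For context, a minimal sketch (not from this PR) of the retrain-after-search workflow such a docs note could describe; automl is a finished AutoMLSearch and pipeline_id is a placeholder for any id from automl.rankings:

pipeline = automl.get_pipeline(pipeline_id)   # untrained copy; with this change, threshold is None
pipeline.fit(X_train, y_train)                # user retrains on their own training data
scores = pipeline.score(X_holdout, y_holdout, objectives=["F1"])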

new_pipeline = pipeline.new(parameters, random_seed=self.random_seed)
if is_binary(self.problem_type):
-    new_pipeline.threshold = pipeline.threshold
+    new_pipeline.threshold = None
Contributor

I think it might be better to take the average threshold rather than None. My concern with setting it to None is that it won't reflect the threshold used to compute the scores in the leaderboard. In my opinion, taking the average threshold is closer to matching the mean_cv_score we use to sort the leaderboard.
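For illustration, the averaging alternative might look roughly like this (hypothetical sketch; fold_pipelines stands in for the fitted per-fold pipelines and is not the actual name used in automl_search.py):

import numpy as np

# Hypothetical: average the per-fold tuned thresholds instead of dropping them.
fold_thresholds = [p.threshold for p in fold_pipelines if p.threshold is not None]
new_pipeline.threshold = float(np.mean(fold_thresholds)) if fold_thresholds else None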

Contributor

@freddyaboulton freddyaboulton Mar 7, 2022

Regarding this comment, @bchen1116 brings up a good point: since get_pipeline returns untrained pipelines, None is a valid value because it will be "replaced" once the pipeline is trained.

My concern with that reasoning is that it only holds if users train the pipeline with automl.train_pipelines. In fact, we decided to carry over the threshold in get_pipeline because a user was confused as to why the score from automl.best_pipeline was different from the score they got when they called fit and score themselves. #2844

It now seems clear to me that the underlying issue here is that AutoMLSearch is tuning the threshold without the user being aware of it. This makes it hard to recreate the training procedure that AutoMLSearch did once the pipelines are exported.

I will approve this PR and file an issue to track the problem of users not being aware of the threshold tuning. @bchen1116 Can you write a unit test (if it's not already present) verifying that automl.best_pipeline.score produces the same score as automl.train_pipelines followed by score? Something like this, maybe:

from evalml.demos import load_breast_cancer
from evalml.automl import AutoMLSearch
from evalml.preprocessing import split_data

X, y = load_breast_cancer()
X_train, X_valid, y_train, y_valid = split_data(X, y, "binary")

automl = AutoMLSearch(
    X_train=X_train,
    y_train=y_train,
    problem_type="binary",
    max_batches=4,
    ensembling=True,
    verbose=True,
    automl_algorithm="default",
)
automl.search()

# Score the best pipeline that AutoMLSearch trained (and threshold-tuned) itself.
best_pipeline_score = automl.best_pipeline.score(X_valid, y_valid, objectives=["F1"])

# Retrain the same pipeline through the public API and score it manually.
# Use the best pipeline's id so both scores refer to the same pipeline.
best_id = automl.rankings.iloc[0]["id"]
pl = automl.get_pipeline(best_id)
pl = automl.train_pipelines([pl])[pl.name]
manual_score = pl.score(X_valid, y_valid, objectives=["F1"])

assert best_pipeline_score == manual_score
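And for recreating the threshold tuning outside of train_pipelines, something along these lines, continuing from the snippet above; this is a sketch that assumes evalml's BinaryClassificationPipeline.optimize_threshold(X, y, y_pred_proba, objective) API:

from evalml.objectives import F1

# Recreate what AutoMLSearch does: fit, then tune the decision threshold on
# held-out data for the objective of interest before scoring.
pl = automl.get_pipeline(best_id)
pl.fit(X_train, y_train)
y_pred_proba = pl.predict_proba(X_valid).iloc[:, -1]   # positive-class probabilities
pl.optimize_threshold(X_valid, y_valid, y_pred_proba, F1())
manual_score = pl.score(X_valid, y_valid, objectives=["F1"])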

@bchen1116 bchen1116 merged commit 5a03ca6 into main Mar 7, 2022
@chukarsten chukarsten mentioned this pull request Mar 16, 2022

Development

Successfully merging this pull request may close these issues.

AutoMLSearch uses optimal threshold of last fold in get_pipeline for binary classification
