Fix threshold return in AutoMLSearch #3360
Conversation
Codecov Report
@@           Coverage Diff           @@
##            main   #3360   +/-   ##
=======================================
+ Coverage   99.6%   99.6%   +0.1%
=======================================
  Files        329     329
  Lines      32216   32229     +13
=======================================
+ Hits       32086   32099     +13
  Misses       130     130
Continue to review full report at Codecov.
jeremyliweishih left a comment:
LGTM. Is there anywhere we should add more info or make changes in our docs? I'm not sure we have anything stating that we recommend retraining pipelines after search.
  new_pipeline = pipeline.new(parameters, random_seed=self.random_seed)
  if is_binary(self.problem_type):
-     new_pipeline.threshold = pipeline.threshold
+     new_pipeline.threshold = None
I think it might be better to take the average threshold rather than None. My concern with setting it to None is that it won't reflect the threshold used to compute the scores in the leaderboard. In my opinion, taking the average threshold across the CV folds is closer to matching the mean_cv_score we use to sort the leaderboard.
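For illustration, a minimal standalone sketch of the "average the fold thresholds" alternative; the names here are illustrative, not evalml internals:

import numpy as np

def averaged_threshold(fold_thresholds):
    """Average the tuned thresholds from each CV fold, ignoring folds with no threshold."""
    thresholds = [t for t in fold_thresholds if t is not None]
    return float(np.mean(thresholds)) if thresholds else None

# Example per-fold tuned thresholds; get_pipeline would then assign this average
# to new_pipeline.threshold instead of None.
print(averaged_threshold([0.42, 0.47, 0.51]))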
Regarding this comment, @bchen1116 brings up a good point: since get_pipeline returns untrained pipelines, None is a valid value because it will be replaced once the pipeline is trained.
My concern with that reasoning is that it only holds if users train the pipeline with automl.train_pipelines. In fact, we decided to carry over the threshold in get_pipeline because a user was confused about why the score from automl.best_pipeline was different from the score they got when they called fit and score themselves (#2844).
It now seems clear to me that the underlying issue is that AutoMLSearch tunes the threshold without the user being aware of it, which makes it hard to recreate AutoMLSearch's training procedure once the pipelines are exported.
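To make that concrete, here is a rough sketch of what a user would have to do to recreate the threshold tuning themselves. The pipeline definition, the holdout fraction, and the F1 objective are assumptions for illustration, not what AutoMLSearch necessarily does internally:

from evalml.demos import load_breast_cancer
from evalml.objectives import F1
from evalml.pipelines import BinaryClassificationPipeline
from evalml.preprocessing import split_data

X, y = load_breast_cancer()
# Hold out part of the data for threshold tuning (the split fraction is an assumption).
X_fit, X_tune, y_fit, y_tune = split_data(X, y, "binary", test_size=0.2)

# An illustrative binary pipeline; AutoMLSearch builds its own candidates.
pipeline = BinaryClassificationPipeline(["Imputer", "Random Forest Classifier"])
pipeline.fit(X_fit, y_fit)

# Tune the decision threshold on the holdout using the positive-class probabilities.
y_pred_proba = pipeline.predict_proba(X_tune).iloc[:, 1]
pipeline.threshold = pipeline.optimize_threshold(X_tune, y_tune, y_pred_proba, F1())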
I will approve this PR and file an issue to track the problem of users not being aware of the threshold tuning. @bchen1116, can you write a unit test (if one isn't already present) that automl.best_pipeline.score produces the same score as automl.train_pipelines followed by score? Something like this, maybe:
from evalml.demos import load_breast_cancer
from evalml.automl import AutoMLSearch
from evalml.preprocessing import split_data

X, y = load_breast_cancer()
X_train, X_valid, y_train, y_valid = split_data(X, y, "binary")

automl = AutoMLSearch(X_train, y_train, "binary", max_batches=4, ensembling=True,
                      verbose=True, automl_algorithm="default")
automl.search()

# Score the pipeline that AutoMLSearch trained (and thresholded) itself.
best_pipeline_score = automl.best_pipeline.score(X_valid, y_valid, objectives=["F1"])

# Retrain an untrained copy of the top-ranked pipeline via train_pipelines and score it manually.
pl = automl.get_pipeline(automl.rankings.iloc[0]["id"])
pl = automl.train_pipelines([pl])[pl.name]
manual_score = pl.score(X_valid, y_valid, objectives=["F1"])

assert best_pipeline_score == manual_score
Fixes #3295.
Don't use the threshold of the last CV fold; instead, set it to None.
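As a quick usage sketch of the new behavior (assuming a completed AutoMLSearch run named automl, as in the test snippet above): pipelines returned by get_pipeline no longer carry over a tuned threshold until they are retrained.

pl = automl.get_pipeline(1)
assert pl.threshold is None  # previously this carried the last CV fold's tuned threshold

# Training the pipeline (e.g. via train_pipelines) sets a fresh threshold
# when threshold optimization is enabled for the objective.
pl = automl.train_pipelines([pl])[pl.name]
print(pl.threshold)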