
Trial plateau stopper #156

Merged: 8 commits merged into ray-project:master on Dec 14, 2020

Conversation

krfricke (Contributor) commented Dec 9, 2020

With the stop_on_plateau parameter, trials can be stopped early if their score does not change over a number of results.

If True, a default configuration will be used. If a dict, its entries will be passed as parameters to the respective stopper class. Can also be an instantiated TrialPlateauStopper object.

I'm happy to add an example to the docs, but would like to get initial feedback/review first.
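
In the meantime, here is a rough sketch of the intended usage (the estimator, search space, and kwargs below are only placeholders for illustration):

from ray.tune.stopper import TrialPlateauStopper
from sklearn.linear_model import SGDClassifier
from tune_sklearn import TuneSearchCV

estimator = SGDClassifier()  # placeholder estimator
param_distributions = {"alpha": [1e-4, 1e-3, 1e-2]}  # placeholder search space

# Use a default plateau-stopping configuration
search = TuneSearchCV(estimator, param_distributions, stop_on_plateau=True)

# Or pass TrialPlateauStopper parameters as a dict
search = TuneSearchCV(
    estimator, param_distributions,
    stop_on_plateau={"std": 0.01, "num_results": 8})

# Or pass an instantiated stopper object
search = TuneSearchCV(
    estimator, param_distributions,
    stop_on_plateau=TrialPlateauStopper(metric="average_test_score"))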

Things to consider:

  • Naming
  • API (especially for configuring)
  • Should we move the TrialPlateauStopper to Tune? (cc @richardliaw)

Closes #98 ([Feature Request] Stop tuning upon optimization convergence)

Yard1 (Member) commented Dec 9, 2020

This seems like a feature Tune itself could use. It'd be odd to limit it to just tune-sklearn. Great work!

@@ -265,6 +268,11 @@ class TuneSearchCV(TuneBaseSearchCV):
determined by 'Pipeline.warm_start' or 'Pipeline.partial_fit'
capabilities, which are by default not supported by standard
SKlearn. Defaults to True.
stop_on_plateau (bool|dict|TrialPlateauStopper): Stop trials early if
Yard1 (Member):

Perhaps it would be a good idea to just let users pass their own Stopper instance? That way users could just extend the Stopper class for their own purposes.

krfricke (Contributor, Author):

So you mean that instead of these arguments we just support a stopper argument, and document how to pass a TrialPlateauStopper for this use case?

Yard1 (Member):

Yeah, I believe that would be a good idea. That way users could define their own Stoppers, or import other Stoppers from Tune - and we would not need to add special support for each of them in tune-sklearn.
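
For illustration, a minimal custom stopper might look like this (the class name and cutoff are made up for the example):

from ray.tune.stopper import Stopper

class MaxResultsStopper(Stopper):
    """Hypothetical stopper: end a trial after it reports 10 results."""

    def __init__(self, max_results=10):
        self._max_results = max_results
        self._counts = {}  # trial_id -> number of results seen so far

    def __call__(self, trial_id, result):
        # Called once per reported result; return True to stop this trial.
        self._counts[trial_id] = self._counts.get(trial_id, 0) + 1
        return self._counts[trial_id] >= self._max_results

    def stop_all(self):
        # Return True to stop the whole experiment.
        return False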

krfricke (Contributor, Author):

I think that makes a lot of sense.

krfricke (Contributor, Author):

I refactored the changes. The Tune stoppers now live in a separate PR: ray-project/ray#12750
This PR now mostly contains tests and the passing of custom stoppers to tune.run().

A change I'd like to get your feedback on is that I introduced a "default metric" called objective in the trainable. The idea is that we always have access to the optimization metric under this name. This is important e.g. for the TrialPlateauStopper, which needs to know which metric we are optimizing. Otherwise the name can vary between average_test_score, average_test_True and average_test_False, if I understand the code correctly.
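
Concretely, the trainable reports the score under both names, roughly like this (sketch; average_test_accuracy is just one possible dynamic name):

from ray import tune

def trainable(config):
    score = 0.5  # stand-in for the actual CV score computed here
    # Report under the dynamic name and under the fixed alias "objective"
    tune.report(average_test_accuracy=score, objective=score)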

There might be a better way to achieve this, but this was straightforward to implement. Do you have any suggestions?

Yard1 (Member) commented Dec 10, 2020

@krfricke Looks great!

Just to clear up how refit works: if multimetric scoring is used, the refit parameter must be a string key for a metric in the scoring dict. Any other refit value, including the True and False that are allowed normally, will throw an exception in conjunction with multimetric scoring. Therefore, the names can be average_test_score and average_test_METRIC, where METRIC is dynamic and up to the user. For example:

# (estimator and parameter arguments omitted for brevity;
# accuracy_metric and auc_metric are sklearn scorer objects)
score_dict = {"accuracy": accuracy_metric, "auc": auc_metric}

# Throws an exception when fit is called: "When using multimetric scoring,
# refit must be the name of the scorer used to pick the best parameters.
# If not needed, set refit to False"
ts = TuneSearchCV(scoring=score_dict, refit=True)

# Correct usage: accuracy is used as the objective value,
# and the metric name becomes average_test_accuracy
ts = TuneSearchCV(scoring=score_dict, refit="accuracy")

That being said, the approach you have taken will of course work regardless of what that value is, without concern for its type. I can't think of a better one, and I believe that other sklearn wrappers use a similar approach as well.

Yard1 (Member) commented Dec 10, 2020

BTW, we'll need to update the README too, I think.

krfricke (Contributor, Author):

Thanks for the explanation. I updated the README, but we will have to wait until ray-project/ray#12750 is merged so that the link works.

krfricke (Contributor, Author):

ray-project/ray#12750 is merged now, and I think the test errors are unrelated to this PR.

richardliaw changed the title from "Stop on trial plateau" to "Trial plateau stopper" on Dec 14, 2020
richardliaw (Collaborator):

stop_on_plateau is not provided as an option in this PR, right?

richardliaw merged commit 2e9b187 into ray-project:master on Dec 14, 2020
krfricke deleted the stop-convergence branch on December 14, 2020, 21:50
krfricke (Contributor, Author):

That's right, we just pass stoppers to Ray Tune directly.
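
For example, something like this (assuming the stopper argument this PR adds; the estimator, search space, and stopper kwargs are placeholders):

from ray.tune.stopper import TrialPlateauStopper
from sklearn.linear_model import SGDClassifier
from tune_sklearn import TuneSearchCV

# Stop a trial once the standard deviation of the "objective" metric
# over its last 8 results drops below 0.01
stopper = TrialPlateauStopper(metric="objective", std=0.01, num_results=8)

search = TuneSearchCV(
    SGDClassifier(),
    {"alpha": [1e-4, 1e-3, 1e-2]},
    stopper=stopper)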
