This repository has been archived by the owner on Nov 14, 2023. It is now read-only.

[Feature Request] Stop tuning upon optimization convergence #98

Closed
rohan-gt opened this issue Sep 11, 2020 · 6 comments · Fixed by #156

rohan-gt commented Sep 11, 2020

Is it possible to enable early stopping for any algorithm that does not have partial_fit (e.g. LogisticRegression or RandomForest), just by looking at the train and test (CV) score progression across the trials?

@richardliaw
Collaborator

@rohan-gt good question! Can you clarify what you mean by "early stopping"? Do you mean:

  1. Stop the hyperparameter sweep early, or
  2. Stop the training of individual runs early? (LogisticRegression has the ability to "warm_start", so we leverage that for incremental training).

@rohan-gt
Author

rohan-gt commented Sep 11, 2020

@richardliaw to stop the hyperparameter sweep. Aren't the schedulers supported by Ray Tune used for the same purpose?

@inventormc
Collaborator

In general, we need to be able to look at some metric after each epoch to use Ray Tune's schedulers/early stopping algorithms to stop a hyperparameter sweep early. This is why we currently only early stop on estimators that have partial_fit or warm_start -- we can look at the metric after each epoch. Other sklearn estimators will just fit all the way to completion without giving us a chance to look at metrics in between epochs.
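The distinction above can be sketched in plain Python. This is a hypothetical illustration (the class and function names are made up, not tune-sklearn's API): an estimator exposing partial_fit lets the tuner observe a score after every epoch, which is what Ray Tune's schedulers need in order to stop a trial early, whereas an estimator with only fit() yields a single score at the very end.

```python
class PartialFitEstimator:
    """Toy stand-in for an sklearn estimator that supports partial_fit."""

    def __init__(self):
        self.epochs_seen = 0

    def partial_fit(self, X, y):
        self.epochs_seen += 1  # one incremental pass over the data

    def score(self, X, y):
        # toy score that improves with epochs and saturates
        return 1.0 - 1.0 / (1 + self.epochs_seen)


def train_with_reporting(estimator, X, y, max_epochs, report):
    """Fit epoch by epoch, reporting an intermediate score each time,
    the way incremental training allows when partial_fit/warm_start exists."""
    for epoch in range(max_epochs):
        estimator.partial_fit(X, y)
        # a scheduler could inspect this score and terminate the trial here
        report(epoch, estimator.score(X, y))


history = []
train_with_reporting(PartialFitEstimator(), None, None, 5,
                     lambda e, s: history.append((e, s)))
print(history)
```

With a fit()-only estimator, the `report` callback would fire exactly once, so a scheduler would have nothing to act on mid-trial.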

@richardliaw
Collaborator

Hmm yeah; I think there is value in stopping the hyperparameter tuning if the top score has converged across the last X trials (even before all n_trials trials have been fully evaluated).
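A minimal sketch of that stopping rule, assuming a hypothetical helper (not an existing tune-sklearn API): declare the sweep converged once the best CV score has not improved by more than `tol` over the last `patience` trials. Ray Tune's `Stopper` interface would be a natural home for such a rule.

```python
def sweep_converged(cv_scores, patience=5, tol=1e-4):
    """cv_scores: final CV score of each completed trial, in order.

    Returns True when the best score seen in the last `patience` trials
    does not beat the best score from all earlier trials by more than
    `tol`, i.e. the top score has plateaued.
    """
    if len(cv_scores) <= patience:
        return False  # not enough trials to judge convergence
    best_before = max(cv_scores[:-patience])
    best_recent = max(cv_scores[-patience:])
    return best_recent - best_before <= tol


# illustrative scores: the optimum 0.85 is found at trial 5 and never beaten
scores = [0.61, 0.70, 0.72, 0.72, 0.85, 0.84, 0.85, 0.83, 0.85, 0.85]
print(sweep_converged(scores, patience=5))
```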

@rohan-gt
Author

@richardliaw exactly. You just need to look at the CV score progression.

@richardliaw richardliaw changed the title [Feature Request] Early stopping for algorithms without partial_fit [Feature Request] Stop tuning upon optimization convergence Sep 13, 2020
@rohan-gt
Author

rohan-gt commented Nov 9, 2020

In the graph below I'm taking the cumulative max of the CV score as the trials progress. Here we can see that a major optimum is reached after 8 trials, so we could end the optimization after checking a few more trials beyond that point.

[Screenshot 2020-11-10: cumulative max of CV score across trials]
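The cumulative-max curve described above can be recreated with made-up scores (the numbers are illustrative, not the actual data from the screenshot): the running maximum flattens out once the best trial has been found.

```python
from itertools import accumulate

# hypothetical per-trial CV scores; the best score arrives at trial 8 (index 7)
cv_scores = [0.52, 0.60, 0.58, 0.66, 0.71, 0.69, 0.74, 0.83,
             0.79, 0.82, 0.80, 0.83]

# running best-so-far, i.e. the cumulative max plotted in the graph
cum_max = list(accumulate(cv_scores, max))
print(cum_max)
```

The flat tail of `cum_max` after index 7 is exactly the signal the proposed convergence check would act on.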
