[Feature Request] Stop tuning upon optimization convergence #98
@rohan-gt good question! Can you clarify what you mean by "early stopping"? Do you mean:
@richardliaw To stop the hyperparameter sweep. Aren't the schedulers supported by Ray Tune used for the same purpose?
In general, we need to be able to look at some metric after each epoch in order to use Ray Tune's schedulers/early-stopping algorithms to stop a hyperparameter sweep early. This is why we currently only early stop on estimators that have `partial_fit`.
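As an illustration of why per-epoch metrics matter, here is a minimal, hypothetical sketch (not tune-sklearn's actual implementation, and all names are made up) of the patience-based check a scheduler can only run if the estimator exposes `partial_fit` and reports a validation score after every epoch:

```python
def early_stop_epochs(epoch_scores, patience=3, min_delta=1e-4):
    """Return the epoch index at which training would stop.

    `epoch_scores` is the validation score observed after each
    partial_fit epoch. Training stops once the best score has not
    improved by at least `min_delta` for `patience` consecutive
    epochs; without per-epoch scores this check is impossible.
    """
    best = float("-inf")
    stale = 0
    for i, score in enumerate(epoch_scores):
        if score > best + min_delta:
            best = score
            stale = 0
        else:
            stale += 1
        if stale >= patience:
            return i  # stop here; later epochs are never run
    return len(epoch_scores) - 1  # ran all epochs
```

For example, with scores `[0.6, 0.7, 0.71, 0.71, 0.71, 0.71]` and `patience=3`, training stops at epoch index 5, after three epochs without improvement.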
Hmm, yeah; I think there may be value in stopping the hyperparameter tuning if the top score has converged across the last X trials, though (even before all trials have been fully evaluated).
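One way to sketch that idea (hypothetical class and method names, not an existing tune-sklearn or Ray Tune API) is an experiment-level stopper that ends the whole sweep once the best CV score has not improved over the last X completed trials:

```python
class TopScorePlateauStopper:
    """Stop the whole sweep when the best cross-validation score has not
    improved by at least `min_delta` over the last `patience` trials.

    Hypothetical sketch; with Ray Tune the same logic could live in a
    tune.Stopper subclass that inspects each trial's reported result.
    """

    def __init__(self, patience=10, min_delta=1e-4):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("-inf")
        self.stale_trials = 0

    def report_trial(self, cv_score):
        """Record a finished trial's CV score; return True to stop the sweep."""
        if cv_score > self.best + self.min_delta:
            self.best = cv_score
            self.stale_trials = 0
        else:
            self.stale_trials += 1
        return self.stale_trials >= self.patience
```

Note the difference from schedulers: a scheduler kills individual underperforming trials mid-training, while this stops launching new trials once the top score has plateaued, which is what convergence of the sweep calls for.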
@richardliaw Exactly. You just need to look at the CV score progression.
Is it possible to enable early stopping for any algorithm that does not have `partial_fit` (e.g. LogisticRegression or RandomForest), just by looking at the train and test (CV) score progression across trials?