
Early stopping using max_score #6

Closed
DavidFricker opened this issue Dec 17, 2020 · 2 comments
@DavidFricker

I am trialling Hyperactive in conjunction with early stopping.

From my reading of the code and docs of this package, it looks like early stopping can be achieved by passing parameters along with the name of the chosen gradient-free optimizer, like below:

optimizer = {"Bayesian": {"max_score": 0.9}}

However, the code below does not stop early and continues until all n_iter are completed, which is not the behaviour I expected. Could you please let me know what I am doing wrong in trying to get early stopping to work?

Full example:

from hyperactive import Hyperactive
import time
import numpy as np

def my_model(para, X, y):
    # dummy objective: always takes 3 seconds and returns a constant score
    time.sleep(3)
    return 0.9

search_config = {
    my_model: {"n_estimators": range(10, 200, 10)}
}

opt = Hyperactive(np.asarray([]), np.asarray([]), memory="short")
opt.search(search_config, n_jobs=1, n_iter=8, optimizer={"Bayesian": {"max_score": 0.1}})

Output:

Set random start position
Thread 0 -> my_model: 100%|██████████| 8/8 [00:18<00:00,  2.25s/it, best_score=0.9, best_since_iter=0]
best para = {'n_estimators': 180}
score     = 0.9

Thanks

@SimonBlanke
Owner

Hello David,

I am glad to see that you are still using Hyperactive.

In the current v2.x versions, Hyperactive cannot do an early stop. This feature will be implemented in v3.0, coming in a few weeks.

However, part of the current Hyperactive development is the separation of the optimization algorithms into a package with a simpler API. It is called Gradient-Free-Optimizers and is basically a Hyperactive-lite. It will be the optimization back end of Hyperactive in the future, but it can also be used by itself, without Hyperactive:
https://github.com/SimonBlanke/Gradient-Free-Optimizers

Your code would translate into the following with Gradient-Free-Optimizers:

import time
import numpy as np
from gradient_free_optimizers import BayesianOptimizer


def my_model(para):
    # dummy objective: always takes 3 seconds and returns a constant score
    time.sleep(3)
    return 0.9


search_space = {"n_estimators": np.arange(10, 200, 10)}

c_time = time.time()
opt = BayesianOptimizer(search_space)
# max_score stops the search as soon as a score of at least 0.9 is found
opt.search(my_model, n_iter=8, max_score=0.9)
diff_time = time.time() - c_time

print("\n Optimization time:", diff_time)

Gradient-Free-Optimizers is well tested and very easy to use, but it has a few restrictions compared to Hyperactive:

  • The search space must be numeric (numpy arrays); see the sketch after this list
  • No built-in multiprocessing
  • No access to optimizer parameters at run time (this will be possible in Hyperactive 3.0)
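
As a minimal sketch of the first restriction (the search-space entries below are hypothetical examples, not code from this issue):

import numpy as np

# Gradient-Free-Optimizers: every dimension must be a numeric numpy array
search_space = {
    "n_estimators": np.arange(10, 200, 10),
    "max_depth": np.arange(2, 12),
}

# A Hyperactive search space, by contrast, may also contain non-numeric
# values, e.g. {"criterion": ["gini", "entropy"]}, which
# Gradient-Free-Optimizers does not accept.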

To make the relationship between Hyperactive and Gradient-Free-Optimizers clear, I will add a section to the readme about the differences and purposes of both packages.

I hope I was able to help you. If you have additional questions or suggestions, please let me know.

@DavidFricker
Author

Hi Simon,

Thank you for your fast and in-depth response!

I appreciate the workaround code you have provided; it is working great.
We will use the Gradient-Free-Optimizers package and wait for Hyperactive 3.x to be released. Your approach makes a lot of sense :)

Thanks,
David
