How can I optimize a black-box function using Ax? #1077
Hi @Song-Hyeong-Yun, thank you for reaching out. The example functions in the tutorials are used only as evaluation functions for demonstration. To optimize an arbitrary evaluation function, you can wrap it as in the example here: https://ax.dev/tutorials/gpei_hartmann_service.html (Section 3, "Define how to evaluate trials"). Your function takes a parameterization (e.g. `{'x1': 1.0, 'x2': 2.0}`) and returns the evaluation outcome in the format `{metric_name -> (mean, SEM)}`. Then, in your optimization config, you can specify the metric that your evaluation function returns. With this setup, you can get a recommended candidate (`get_next_trial()`), record the evaluation outcome from your evaluation function (`complete_trial()`), and repeat. If you are using the Service API, please see Section 4, "Run optimization loop", on https://ax.dev/tutorials/gpei_hartmann_service.html. I hope this helps. Please let us know if you have additional questions.
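As a minimal sketch of that format, assuming a single metric (the names `evaluate`, `my_black_box`, and `"objective"` are illustrative, not part of the Ax API):

```python
def evaluate(parameterization):
    # The parameterization arrives as a dict, e.g. {'x1': 1.0, 'x2': 2.0}.
    x1, x2 = parameterization["x1"], parameterization["x2"]
    mean = my_black_box(x1, x2)  # hypothetical: your real measurement or simulation
    sem = 0.1  # standard error of the mean; use 0.0 if observations are noiseless
    # Return {metric_name -> (mean, SEM)} as described above.
    return {"objective": (mean, sem)}
```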
Hi @Song-Hyeong-Yun. I'll go ahead and close this issue. Please feel free to reach out again if you run into other issues or have other questions.
Thank you for your help.
Thanks to your advice, I was able to create an experiment and insert trials and metrics:

```python
from ax import *

arm1 = Arm(parameters={'x1': 1.0, 'x2': 2.0})
range_param1 = RangeParameter(name="x1", lower=-3.0, upper=5.0, parameter_type=ParameterType.FLOAT)
search_space = SearchSpace(
experiment = Experiment(
generator1 = GeneratorRun(arms=[arm1])
trial1 = experiment.new_trial(generator_run=generator1)
metric = {'metric1': (10.0, 0.1), 'metric2': (2.0, 0.1), 'metric3': (20.0, 0.1), 'metric4': (5.0, 0.1), 'metric5': (5.0, 0.1)}
fetch_data = experiment.fetch_data(metrics=metric)
```

Here's my question.
I think you're confusing metrics with trial results, so all of the trial results should have the same metric. I rewrote your code using the metric `"foo"` and a placeholder `blackbox` function:

```python
from ax.service.ax_client import AxClient

ax_client = AxClient()

def blackbox(params):  # replace this stub with your real black-box evaluation
    return (params["x1"], 0.1)  # returns (mean, SEM) for the "foo" metric

ax_client.create_experiment(
    name="blackbox_experiment",
    parameters=[
        {
            "name": "x1",
            "type": "range",
            "bounds": [-3.0, 5.0],
            "value_type": "float",  # Optional, defaults to inference from type of "bounds".
            "log_scale": False,  # Optional, defaults to False.
        },
        {
            "name": "x2",
            "type": "range",
            "bounds": [0.0, 7.0],
        },
    ],
    objective_name="foo",
    minimize=True,  # set to False if "foo" should be maximized
)

ax_client.attach_trial({'x1': 1.0, 'x2': 2.0})
ax_client.attach_trial({'x1': 3.0, 'x2': 6.0})
ax_client.attach_trial({'x1': -2.0, 'x2': 3.0})
ax_client.attach_trial({'x1': 4.0, 'x2': 4.0})
ax_client.attach_trial({'x1': 1.0, 'x2': 3.0})

ax_client.complete_trial(0, {'foo': (10.0, 0.1)})
ax_client.complete_trial(1, {'foo': (2.0, 0.1)})
ax_client.complete_trial(2, {'foo': (20.0, 0.1)})
ax_client.complete_trial(3, {'foo': (5.0, 0.1)})
ax_client.complete_trial(4, {'foo': (5.0, 0.1)})

for i in range(25):
    parameters, trial_index = ax_client.get_next_trial()
    ax_client.complete_trial(trial_index=trial_index, raw_data=blackbox(parameters))

best_parameters, values = ax_client.get_best_parameters()
print(best_parameters)
```
It will do Sobol (5 random points before using GPEI) in this example. If you want to hack around this, execute this line before the loop:

```python
ax_client.generation_strategy._curr = ax_client.generation_strategy._steps[1]
```
Thanks Daniel!
I'm going to close this again, but if you have further questions you can reopen it.
Hello. Thanks to your help, there has been a lot of improvement in my research. I inserted a `GenerationStrategy` into the `AxClient` like this:
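A minimal sketch of what such a strategy might look like; the Sobol step, `Models.BOTORCH_MODULAR`, and the `Surrogate` wrapper are assumptions inferred from the traceback further down, not the exact original snippet:

```python
from ax.modelbridge.generation_strategy import GenerationStrategy, GenerationStep
from ax.modelbridge.registry import Models
from ax.models.torch.botorch_modular.surrogate import Surrogate
from ax.service.ax_client import AxClient
from botorch.models.gp_regression import HeteroskedasticSingleTaskGP

gs = GenerationStrategy(
    steps=[
        # Quasi-random Sobol points to seed the model.
        GenerationStep(model=Models.SOBOL, num_trials=5),
        # Modular BoTorch model with an explicit surrogate class.
        GenerationStep(
            model=Models.BOTORCH_MODULAR,
            num_trials=-1,  # no cap on trials generated by this step
            model_kwargs={"surrogate": Surrogate(HeteroskedasticSingleTaskGP)},
        ),
    ]
)
ax_client = AxClient(generation_strategy=gs)
```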
Here's my question: I don't know whether to use `SingleTaskGP` or `HeteroskedasticSingleTaskGP`.
@Song-Hyeong-Yun The choice of model will depend on whether you have noise observations or not. If you don't pass noise observations (SEMs), `SingleTaskGP` will infer a single homoskedastic noise level from the data; if you do pass SEMs, the model can use those observed noise levels directly.
Depends on how it changes. Is the noise level changing based on the location of the observations in the search space? In that case, `HeteroskedasticSingleTaskGP` would be the appropriate choice, since it models the noise level as a function of the inputs.
@saitcakmak Thank you for your answer. I attached and completed trials like this:

```
[INFO 10-04 10:24:48] ax.service.ax_client: Attached custom parameterization {'wt': 25, 'DT': 170, 'CT': 700} as trial 1.
[INFO 10-04 10:24:48] ax.service.ax_client: Attached custom parameterization {'wt': 25, 'DT': 115, 'CT': 700} as trial 2.
[INFO 10-04 10:24:48] ax.service.ax_client: Attached custom parameterization {'wt': 25, 'DT': 115, 'CT': 700} as trial 3.
[INFO 10-04 10:24:48] ax.service.ax_client: Attached custom parameterization {'wt': 25, 'DT': 150, 'CT': 700} as trial 4.
[INFO 10-04 10:24:48] ax.service.ax_client: Attached custom parameterization {'wt': 25, 'DT': 190, 'CT': 700} as trial 5.
[INFO 10-04 10:24:48] ax.service.ax_client: Attached custom parameterization {'wt': 25, 'DT': 190, 'CT': 700} as trial 6.
[INFO 10-04 10:24:48] ax.service.ax_client: Attached custom parameterization {'wt': 25, 'DT': 210, 'CT': 700} as trial 7.
[INFO 10-04 10:24:48] ax.service.ax_client: Attached custom parameterization {'wt': 25, 'DT': 250, 'CT': 700} as trial 8.
[INFO 10-04 10:24:48] ax.service.ax_client: Attached custom parameterization {'wt': 25, 'DT': 170, 'CT': 450} as trial 9.
[INFO 10-04 10:24:48] ax.service.ax_client: Attached custom parameterization {'wt': 25, 'DT': 170, 'CT': 300} as trial 10.
```

However, the problem occurs when I call `get_next_trial()`:
The error message says this:

```
~\.conda\envs\BoTorch\lib\site-packages\ax\utils\common\executils.py in actual_wrapper(*args, **kwargs)
~\.conda\envs\BoTorch\lib\site-packages\ax\service\ax_client.py in get_next_trial(self, ttl_seconds, force)
~\.conda\envs\BoTorch\lib\site-packages\ax\service\ax_client.py in _gen_new_generator_run(self, n)
~\.conda\envs\BoTorch\lib\site-packages\ax\modelbridge\generation_strategy.py in gen(self, experiment, data, n, pending_observations, **kwargs)
~\.conda\envs\BoTorch\lib\site-packages\ax\modelbridge\generation_strategy.py in _gen_multiple(self, experiment, num_generator_runs, data, n, pending_observations, **kwargs)
~\.conda\envs\BoTorch\lib\site-packages\ax\modelbridge\generation_strategy.py in _fit_or_update_current_model(self, data)
~\.conda\envs\BoTorch\lib\site-packages\ax\modelbridge\generation_strategy.py in _fit_current_model(self, data)
~\.conda\envs\BoTorch\lib\site-packages\ax\modelbridge\generation_node.py in fit(self, experiment, data, search_space, optimization_config, **kwargs)
~\.conda\envs\BoTorch\lib\site-packages\ax\modelbridge\model_spec.py in fit(self, experiment, data, **model_kwargs)
~\.conda\envs\BoTorch\lib\site-packages\ax\modelbridge\registry.py in __call__(self, search_space, experiment, data, silently_filter_kwargs, **kwargs)
~\.conda\envs\BoTorch\lib\site-packages\ax\modelbridge\torch.py in __init__(self, experiment, search_space, data, model, transforms, transform_configs, torch_dtype, torch_device, status_quo_name, status_quo_features, optimization_config, fit_out_of_design, objective_thresholds, default_model_gen_options)
~\.conda\envs\BoTorch\lib\site-packages\ax\modelbridge\base.py in __init__(self, search_space, model, transforms, experiment, data, transform_configs, status_quo_name, status_quo_features, optimization_config, fit_out_of_design, fit_abandoned)
~\.conda\envs\BoTorch\lib\site-packages\ax\modelbridge\torch.py in _fit(self, model, search_space, observation_features, observation_data)
~\.conda\envs\BoTorch\lib\site-packages\ax\modelbridge\array.py in _fit(self, model, search_space, observation_features, observation_data)
~\.conda\envs\BoTorch\lib\site-packages\ax\modelbridge\torch.py in _model_fit(self, model, Xs, Ys, Yvars, search_space_digest, metric_names, candidate_metadata)
~\.conda\envs\BoTorch\lib\site-packages\ax\models\torch\botorch_modular\model.py in fit(self, Xs, Ys, Yvars, search_space_digest, metric_names, target_fidelities, candidate_metadata, state_dict, refit)
~\.conda\envs\BoTorch\lib\site-packages\ax\models\torch\botorch_modular\surrogate.py in fit(self, training_data, search_space_digest, metric_names, candidate_metadata, state_dict, refit)
~\.conda\envs\BoTorch\lib\site-packages\ax\models\torch\botorch_modular\surrogate.py in construct(self, training_data, **kwargs)

TypeError: __init__() missing 1 required positional argument: 'train_Yvar'
```

I don't know why this error occurs, because it seems that the noise is properly included in the completed trials.
Could you provide a full code sample for how you are defining the GenerationStrategy with the `HeteroskedasticSingleTaskGP`? Taking a step back, what is the reason you want to use that model in the first place? Are you interested in modeling the noise level out-of-sample? Note that (despite the maybe somewhat unintuitive name) even a `FixedNoiseGP` can use a different observed noise level for each training point, so heteroskedastic noise observations are handled in-sample; what it doesn't do is model the noise as a function of the inputs for out-of-sample predictions. Some more detail on this is here: pytorch/botorch#1436 (comment)
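As a hedged illustration of that point (the tensors are made-up data, not from this thread): `FixedNoiseGP` takes a per-observation `train_Yvar`, so the observed noise can differ across points even though it isn't modeled as a function of the inputs:

```python
import torch
from botorch.models.gp_regression import FixedNoiseGP

# Made-up training data: 5 points in a 2D search space.
train_X = torch.rand(5, 2, dtype=torch.double)
train_Y = torch.rand(5, 1, dtype=torch.double)
# Per-point observed noise variances -- they need not be equal.
train_Yvar = torch.tensor([[0.01], [0.04], [0.02], [0.09], [0.01]], dtype=torch.double)

model = FixedNoiseGP(train_X, train_Y, train_Yvar)
```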
Hi @Song-Hyeong-Yun, I've adapted @danielcohenlive's example with a generation strategy taken from issue #1178 and am able to generate points as follows:
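The exact strategy from #1178 isn't reproduced here; below is a sketch, assuming a Sobol step followed by `Models.BOTORCH_MODULAR` with a `SingleTaskGP` surrogate (which, unlike `HeteroskedasticSingleTaskGP`, does not require `train_Yvar` at construction):

```python
from ax.modelbridge.generation_strategy import GenerationStrategy, GenerationStep
from ax.modelbridge.registry import Models
from ax.models.torch.botorch_modular.surrogate import Surrogate
from ax.service.ax_client import AxClient
from botorch.models.gp_regression import SingleTaskGP

gs = GenerationStrategy(
    steps=[
        GenerationStep(model=Models.SOBOL, num_trials=5),
        GenerationStep(
            model=Models.BOTORCH_MODULAR,
            num_trials=-1,
            model_kwargs={"surrogate": Surrogate(SingleTaskGP)},
        ),
    ]
)
ax_client = AxClient(generation_strategy=gs)
# ... then create_experiment / attach_trial / complete_trial as in the example above.
```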
Note that you may want to keep the Sobol GenerationStep from #1178 to increase the random exploration of the space that will seed the BO model. Please let me know if following this example works for you, or provide a full code sample as @Balandat requests above. |
@Balandat Thank you for your answer!
Yes. Basically, if you don't specify a GenerationStrategy and pass in the SEMs with the data (as you do above), the default strategy will automatically choose a model that uses the observed noise levels (a `FixedNoiseGP` under the hood).
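As a concrete illustration, reusing the `"foo"` metric from the earlier example:

```python
# Passing an SEM tells Ax the observed noise level of this measurement:
ax_client.complete_trial(trial_index=0, raw_data={"foo": (2.0, 0.1)})

# Passing None as the SEM tells Ax the noise level is unknown,
# so the model will infer it from the data:
ax_client.complete_trial(trial_index=1, raw_data={"foo": (2.0, None)})
```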
@Balandat |
@bernardbeckerman Apart from that, the same error occurred when I executed your code. |
Closing as inactive. Please reopen if following up. |
Hello, I'm a chemical engineering researcher in South Korea.
I'd like to apply Bayesian optimization in my research area.
I want to optimize a black-box function, but the tutorials implement the optimization with a known function (e.g., Branin or Hartmann6).
Therefore, I couldn't build an 'optimization config' and failed to implement Bayesian optimization.
For example, I have 5 arms and the 5 corresponding outputs:

```python
arm1 = Arm(parameters={'x1': 1.0, 'x2': 2.0})
arm2 = Arm(parameters={'x1': 3.0, 'x2': 6.0})
arm3 = Arm(parameters={'x1': -2.0, 'x2': 3.0})
arm4 = Arm(parameters={'x1': 4.0, 'x2': 4.0})
arm5 = Arm(parameters={'x1': 1.0, 'x2': 3.0})
# outputs: (10.0, 2.0, 20.0, 5.0, 5.0)
```
I want to use the GPEI model and get suggestions for the next parameters.
How can I code this? Or is there a page that would help me figure it out?