
How can I optimize a black-box function using Ax? #1077

Closed
Song-Hyeong-Yun opened this issue Aug 18, 2022 · 18 comments

@Song-Hyeong-Yun

Hello, I'm a chemical engineering researcher in South Korea.
I'd like to apply Bayesian optimization in my research area.
I want to optimize a black-box function, but the optimizations on the tutorial page are implemented with known functions (e.g. Branin or Hartmann6).
Because of that, I couldn't create an 'optimization config' and I wasn't able to set up the Bayesian optimization.

For example, I have 5 arms and 5 corresponding outputs for these parameters.
arm1=Arm(parameters = {'x1': 1.0, 'x2': 2.0})
arm2=Arm(parameters = {'x1': 3.0, 'x2': 6.0})
arm3=Arm(parameters = {'x1': -2.0, 'x2': 3.0})
arm4=Arm(parameters = {'x1': 4.0, 'x2': 4.0})
arm5=Arm(parameters = {'x1': 1.0, 'x2': 3.0})
outputs: (10.0, 2.0, 20.0, 5.0, 5.0)

I want to use the GPEI model and get suggestions for the next parameters.
How can I code it? Or is there a page that would help me figure it out?

@pcanaran
Contributor

Hi @Song-Hyeong-Yun, thank you for reaching out. The example functions in the tutorials are used only as evaluation functions for demonstration.

To optimize an arbitrary evaluation function, you can wrap it as in the example here: https://ax.dev/tutorials/gpei_hartmann_service.html (Section 3: "Define how to evaluate trials"). Your function will take a parameterization (e.g. {'x1': 1.0, 'x2': 2.0}) and return the evaluation outcome in the format {metric_name -> (mean, SEM)}.

Then in your optimization config, you can specify the metric that your evaluation function returns.

With this setup, you can get a recommended candidate (get_next_trial()), record the evaluation outcome from your evaluation function (complete_trial()), and repeat. If you are using the Service API, please see Section 4, "Run optimization loop", on https://ax.dev/tutorials/gpei_hartmann_service.html.
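
As a rough illustration, a minimal sketch of that loop could look like the following (the evaluate body, the metric name "objective", the parameter bounds, and the minimize setting are placeholders to adapt to your problem):

from ax.service.ax_client import AxClient

def evaluate(parameterization):
    # Replace this with your real (black-box) experiment; it must return
    # {metric_name: (mean, SEM)} for each metric you want to track.
    mean, sem = 0.0, 0.1  # placeholder result
    return {"objective": (mean, sem)}

ax_client = AxClient()
ax_client.create_experiment(
    name="my_experiment",
    parameters=[
        {"name": "x1", "type": "range", "bounds": [-3.0, 5.0]},
        {"name": "x2", "type": "range", "bounds": [0.0, 7.0]},
    ],
    objective_name="objective",
    minimize=True,
)

for _ in range(10):
    parameters, trial_index = ax_client.get_next_trial()
    ax_client.complete_trial(trial_index=trial_index, raw_data=evaluate(parameters))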

I hope this helps. Please let us know if you have additional questions.

@lena-kashtelyan lena-kashtelyan added the question Further information is requested label Aug 18, 2022
@pcanaran
Contributor

Hi @Song-Hyeong-Yun. I'll go ahead and close this issue. Please feel free to reach out again if you run into other issues or have other questions.

@Song-Hyeong-Yun
Author

Hi @Song-Hyeong-Yun. I'll go ahead and close this issue. Please feel free to reach out again if you run into other issues or have other questions.

Thank you for your help.

@Song-Hyeong-Yun
Author

Thanks to your advice, I was able to create an experiment and attach trials and metrics.
Here's my code.

from ax import *
from ax import Metric
from ax import Objective

arm1 = Arm(parameters={'x1': 1.0, 'x2': 2.0})
arm2 = Arm(parameters={'x1': 3.0, 'x2': 6.0})
arm3 = Arm(parameters={'x1': -2.0, 'x2': 3.0})
arm4 = Arm(parameters={'x1': 4.0, 'x2': 4.0})
arm5 = Arm(parameters={'x1': 1.0, 'x2': 3.0})

range_param1 = RangeParameter(name="x1", lower=-3.0, upper=5.0, parameter_type=ParameterType.FLOAT)
range_param2 = RangeParameter(name="x2", lower=0.0, upper=7.0, parameter_type=ParameterType.FLOAT)

search_space = SearchSpace(
    parameters=[range_param1, range_param2],
)

experiment = Experiment(
    name="test_experiment",
    search_space=search_space,
)

generator1 = GeneratorRun(arms=[arm1])
generator2 = GeneratorRun(arms=[arm2])
generator3 = GeneratorRun(arms=[arm3])
generator4 = GeneratorRun(arms=[arm4])
generator5 = GeneratorRun(arms=[arm5])

trial1 = experiment.new_trial(generator_run=generator1)
trial2 = experiment.new_trial(generator_run=generator2)
trial3 = experiment.new_trial(generator_run=generator3)
trial4 = experiment.new_trial(generator_run=generator4)
trial5 = experiment.new_trial(generator_run=generator5)

metric = {'metric1': (10.0, 0.1), 'metric2': (2.0, 0.1), 'metric3': (20.0, 0.1), 'metric4': (5.0, 0.1), 'metric5': (5.0, 0.1)}

fetch_data = experiment.fetch_data(metrics=metric)

Here's my question: I want to use the GPEI model and get suggestions for the next parameters.
How can I code it?
I went through the link you shared, but I couldn't figure it out.
Please help me. Thank you.

@danielcohenlive

I think you're confusing metrics with trial results: all of the trial results should report the same metric. I rewrote your code using a single metric "foo" and a blackbox() function that you can replace with your own. I used the Service API because I find it much easier for this kind of problem, and I think it will work better for you too : )

from ax.service.ax_client import AxClient
ax_client = AxClient()


def blackbox(params):  # rewrite me
    return (params["x1"], 0.1)


ax_client.create_experiment(
    name="blackbox_experiment",
    parameters=[
        {
            "name": "x1",
            "type": "range",
            "bounds": [-3.0, 5.0],
            "value_type": "float",  # Optional, defaults to inference from type of "bounds".
            "log_scale": False,  # Optional, defaults to False.
        },
        {
            "name": "x2",
            "type": "range",
            "bounds": [0.0, 7.0],
        },
    ],
    objective_name="foo",
    minimize=True,  # it could be false though
)

ax_client.attach_trial({'x1': 1.0, 'x2': 2.0})
ax_client.attach_trial({'x1': 3.0, 'x2': 6.0})
ax_client.attach_trial({'x1': -2.0, 'x2': 3.0})
ax_client.attach_trial({'x1': 4.0, 'x2': 4.0})
ax_client.attach_trial({'x1': 1.0, 'x2': 3.0})

ax_client.complete_trial(0, {'foo':(10.0, 0.1)})
ax_client.complete_trial(1, {'foo':(2.0, 0.1)})
ax_client.complete_trial(2, {'foo':(20.0, 0.1)})
ax_client.complete_trial(3, {'foo':(5.0, 0.1)})
ax_client.complete_trial(4, {'foo':(5.0, 0.1)})

for i in range(25):
    parameters, trial_index = ax_client.get_next_trial()
    ax_client.complete_trial(trial_index=trial_index, raw_data=blackbox(parameters))

best_parameters, values = ax_client.get_best_parameters()
print(best_parameters)

@danielcohenlive

By default this example will run Sobol (5 quasi-random points) before switching to GPEI. If you want to skip that, execute this line before the loop:

ax_client.generation_strategy._curr = ax_client.generation_strategy._steps[1]

@danielcohenlive danielcohenlive self-assigned this Aug 25, 2022
@Song-Hyeong-Yun
Author

Thanks Daniel!

@danielcohenlive

I'm going to close this again, but if you have further questions you can reopen it.

@Song-Hyeong-Yun
Author

Hello. Thanks to your help, there has been a lot of improvement in my research.
I want to use BO to optimize the product yield as I vary the reaction conditions.

I passed a GenerationStrategy to the AxClient like this:

gs = GenerationStrategy(
    steps=[
        GenerationStep(
            model=Models.BOTORCH_MODULAR,
            num_trials=-1,
            model_kwargs={
                "surrogate": Surrogate(SingleTaskGP),
                "botorch_acqf_class": qKnowledgeGradient,
            },
        )
    ]
)

Here's my question. I don't know which to use, SingleTaskGP or HeteroskedasticSingleTaskGP.
I have read this page https://botorch.org/api/models.html#botorch.models.gp_regression.SingleTaskGP.
If the noise changes from batch to batch, is it right to use HeteroskedasticSingleTaskGP?
Please help me.

@saitcakmak
Contributor

saitcakmak commented Oct 3, 2022

@Song-Hyeong-Yun The choice of the model will depend on whether you have noise observations or not. If you don't pass surrogate in model_kwargs, it should select SingleTaskGP or FixedNoiseGP based on whether you observe noise or not.

If the noise changes from batch to batch, is it right to use HeteroskedasticSingleTaskGP?

Depends on how it changes. Is the noise level changing based on the location of the observations in the search space? In that case, HeteroskedasticSingleTaskGP is likely a good choice, though there may be an issue with it based on #1178. FixedNoiseGP should also handle this case for the most part. Note that both of these will expect you to observe the noise standard deviation. If you do not observe the noise, then you should use SingleTaskGP.
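
For completeness, if you do want to pin the model explicitly rather than rely on the default selection, a minimal sketch (mirroring the modular setup used elsewhere in this thread; FixedNoiseGP is just one possible choice here) could look like:

from botorch.models import FixedNoiseGP
from ax.models.torch.botorch_modular.surrogate import Surrogate
from ax.modelbridge.registry import Models
from ax.modelbridge.generation_node import GenerationStep
from ax.modelbridge.generation_strategy import GenerationStrategy

# Pin the surrogate to FixedNoiseGP, which expects observed noise (SEMs).
# Omitting "surrogate" entirely lets Ax pick SingleTaskGP or FixedNoiseGP
# depending on whether noise is observed.
gs = GenerationStrategy(
    steps=[
        GenerationStep(
            model=Models.BOTORCH_MODULAR,
            num_trials=-1,
            model_kwargs={"surrogate": Surrogate(FixedNoiseGP)},
        )
    ]
)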

@Song-Hyeong-Yun
Author

Song-Hyeong-Yun commented Oct 5, 2022

@saitcakmak Thank you for your answer.
I read your answer and #1178.
I chose to use HeteroskedasticSingleTaskGP.

I attached and completed trials like this.
[INFO 10-04 10:24:48] ax.service.ax_client: Attached custom parameterization {'wt': 25, 'DT': 170, 'CT': 700} as trial 0.
[INFO 10-04 10:24:48] ax.service.ax_client: Completed trial 0 with data: {'yield': (208.8, 15.061374)}.

[INFO 10-04 10:24:48] ax.service.ax_client: Attached custom parameterization {'wt': 25, 'DT': 170, 'CT': 700} as trial 1.
[INFO 10-04 10:24:48] ax.service.ax_client: Completed trial 1 with data: {'yield': (230.1, 10.0)}.

[INFO 10-04 10:24:48] ax.service.ax_client: Attached custom parameterization {'wt': 25, 'DT': 115, 'CT': 700} as trial 2.
[INFO 10-04 10:24:48] ax.service.ax_client: Completed trial 2 with data: {'yield': (170.4, 1.979899)}.

[INFO 10-04 10:24:48] ax.service.ax_client: Attached custom parameterization {'wt': 25, 'DT': 115, 'CT': 700} as trial 3.
[INFO 10-04 10:24:48] ax.service.ax_client: Completed trial 3 with data: {'yield': (167.6, 10.0)}.

[INFO 10-04 10:24:48] ax.service.ax_client: Attached custom parameterization {'wt': 25, 'DT': 150, 'CT': 700} as trial 4.
[INFO 10-04 10:24:48] ax.service.ax_client: Completed trial 4 with data: {'yield': (197.1, 10.0)}.

[INFO 10-04 10:24:48] ax.service.ax_client: Attached custom parameterization {'wt': 25, 'DT': 190, 'CT': 700} as trial 5.
[INFO 10-04 10:24:48] ax.service.ax_client: Completed trial 5 with data: {'yield': (191.4, 2.404163)}.

[INFO 10-04 10:24:48] ax.service.ax_client: Attached custom parameterization {'wt': 25, 'DT': 190, 'CT': 700} as trial 6.
[INFO 10-04 10:24:48] ax.service.ax_client: Completed trial 6 with data: {'yield': (188.0, 10.0)}.

[INFO 10-04 10:24:48] ax.service.ax_client: Attached custom parameterization {'wt': 25, 'DT': 210, 'CT': 700} as trial 7.
[INFO 10-04 10:24:48] ax.service.ax_client: Completed trial 7 with data: {'yield': (200.1, 10.0)}.

[INFO 10-04 10:24:48] ax.service.ax_client: Attached custom parameterization {'wt': 25, 'DT': 250, 'CT': 700} as trial 8.
[INFO 10-04 10:24:48] ax.service.ax_client: Completed trial 8 with data: {'yield': (167.8, 10.0)}.

[INFO 10-04 10:24:48] ax.service.ax_client: Attached custom parameterization {'wt': 25, 'DT': 170, 'CT': 450} as trial 9.
[INFO 10-04 10:24:48] ax.service.ax_client: Completed trial 9 with data: {'yield': (110.9, 10.0)}.

[INFO 10-04 10:24:48] ax.service.ax_client: Attached custom parameterization {'wt': 25, 'DT': 170, 'CT': 300} as trial 10.
[INFO 10-04 10:24:48] ax.service.ax_client: Completed trial 10 with data: {'yield': (123.7, 10.0)}.

However, the problem occurs when I use:

parameters, trial_index = ax_client.get_next_trial()

The error message says this:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_16124\518108282.py in
1 for i in range(2):
----> 2 parameters, trial_index = ax_client.get_next_trial()

~.conda\envs\BoTorch\lib\site-packages\ax\utils\common\executils.py in actual_wrapper(*args, **kwargs)
145 )
146 time.sleep(wait_interval)
--> 147 return func(*args, **kwargs)
148
149 # If we are here, it means the retries were finished but

~.conda\envs\BoTorch\lib\site-packages\ax\service\ax_client.py in get_next_trial(self, ttl_seconds, force)
464 try:
465 trial = self.experiment.new_trial(
--> 466 generator_run=self._gen_new_generator_run(), ttl_seconds=ttl_seconds
467 )
468 except MaxParallelismReachedException as e:

~.conda\envs\BoTorch\lib\site-packages\ax\service\ax_client.py in _gen_new_generator_run(self, n)
1553 n=n,
1554 pending_observations=self._get_pending_observation_features(
-> 1555 experiment=self.experiment
1556 ),
1557 )

~.conda\envs\BoTorch\lib\site-packages\ax\modelbridge\generation_strategy.py in gen(self, experiment, data, n, pending_observations, **kwargs)
336 n=n,
337 pending_observations=pending_observations,
--> 338 **kwargs,
339 )[0]
340

~.conda\envs\BoTorch\lib\site-packages\ax\modelbridge\generation_strategy.py in _gen_multiple(self, experiment, num_generator_runs, data, n, pending_observations, **kwargs)
453 self.experiment = experiment
454 self._maybe_move_to_next_step()
--> 455 self._fit_or_update_current_model(data=data)
456
457 # Make sure to not make too many generator runs and

~.conda\envs\BoTorch\lib\site-packages\ax\modelbridge\generation_strategy.py in _fit_or_update_current_model(self, data)
509 self._update_current_model(new_data=new_data)
510 else:
--> 511 self._fit_current_model(data=self._get_data_for_fit(passed_in_data=data))
512 self._save_seen_trial_indices()
513

~.conda\envs\BoTorch\lib\site-packages\ax\modelbridge\generation_strategy.py in _fit_current_model(self, data)
655 logger.debug(f"Fitting model with data for trials: {trial_indices_in_data}")
656
--> 657 self._curr.fit(experiment=self.experiment, data=data, **model_state_on_lgr)
658 self._model = self._curr.model_spec.fitted_model
659

~.conda\envs\BoTorch\lib\site-packages\ax\modelbridge\generation_node.py in fit(self, experiment, data, search_space, optimization_config, **kwargs)
131 search_space=search_space,
132 optimization_config=optimization_config,
--> 133 **kwargs,
134 )
135

~.conda\envs\BoTorch\lib\site-packages\ax\modelbridge\model_spec.py in fit(self, experiment, data, **model_kwargs)
128 experiment=experiment,
129 data=data,
--> 130 **combined_model_kwargs,
131 )
132

~.conda\envs\BoTorch\lib\site-packages\ax\modelbridge\registry.py in __call__(self, search_space, experiment, data, silently_filter_kwargs, **kwargs)
345 data=data,
346 model=model,
--> 347 **bridge_kwargs,
348 )
349

~.conda\envs\BoTorch\lib\site-packages\ax\modelbridge\torch.py in __init__(self, experiment, search_space, data, model, transforms, transform_configs, torch_dtype, torch_device, status_quo_name, status_quo_features, optimization_config, fit_out_of_design, objective_thresholds, default_model_gen_options)
108 status_quo_features=status_quo_features,
109 optimization_config=optimization_config,
--> 110 fit_out_of_design=fit_out_of_design,
111 )
112

~.conda\envs\BoTorch\lib\site-packages\ax\modelbridge\base.py in __init__(self, search_space, model, transforms, experiment, data, transform_configs, status_quo_name, status_quo_features, optimization_config, fit_out_of_design, fit_abandoned)
181 search_space=search_space,
182 observation_features=obs_feats,
--> 183 observation_data=obs_data,
184 )
185 self.fit_time = time.time() - t_fit_start

~.conda\envs\BoTorch\lib\site-packages\ax\modelbridge\torch.py in _fit(self, model, search_space, observation_features, observation_data)
134 search_space=search_space,
135 observation_features=observation_features,
--> 136 observation_data=observation_data,
137 )
138

~.conda\envs\BoTorch\lib\site-packages\ax\modelbridge\array.py in _fit(self, model, search_space, observation_features, observation_data)
106 search_space_digest=search_space_digest,
107 metric_names=self.outcomes,
--> 108 candidate_metadata=candidate_metadata,
109 )
110

~.conda\envs\BoTorch\lib\site-packages\ax\modelbridge\torch.py in _model_fit(self, model, Xs, Ys, Yvars, search_space_digest, metric_names, candidate_metadata)
202 search_space_digest=search_space_digest,
203 metric_names=metric_names,
--> 204 candidate_metadata=candidate_metadata,
205 )
206

~.conda\envs\BoTorch\lib\site-packages\ax\models\torch\botorch_modular\model.py in fit(self, Xs, Ys, Yvars, search_space_digest, metric_names, target_fidelities, candidate_metadata, state_dict, refit)
202 candidate_metadata=candidate_metadata,
203 state_dict=state_dict,
--> 204 refit=refit,
205 )
206

~.conda\envs\BoTorch\lib\site-packages\ax\models\torch\botorch_modular\surrogate.py in fit(self, training_data, search_space_digest, metric_names, candidate_metadata, state_dict, refit)
276 training_data=training_data,
277 metric_names=metric_names,
--> 278 **dataclasses.asdict(search_space_digest),
279 )
280 if state_dict:

~.conda\envs\BoTorch\lib\site-packages\ax\models\torch\botorch_modular\surrogate.py in construct(self, training_data, **kwargs)
226
227 # pyre-ignore [45]
--> 228 self._model = self.botorch_model_class(**formatted_model_inputs)
229
230 def fit(

TypeError: __init__() missing 1 required positional argument: 'train_Yvar'

I don't know why this error occurs because it seems that noise is properly included in the completed trial.
How can I fix this problem?
Please help me.
Thank you so much in advance!

@Balandat
Contributor

Balandat commented Oct 7, 2022

Could you provide a full code sample for how you are defining the GenerationStrategy with the HeteroskedasticSingleTaskGP?

Taking a step back, what is the reason you want to use that model in the first place? Are you interested in modeling the noise level out-of-sample? Note that (despite the maybe somewhat unintuitive name) even a FixedNoiseGP will properly deal with heteroskedastic noise, and in fact it is the default model chosen under the hood when not specifying a custom GenerationStrategy. There are basically two reasons you'd want to use a HeteroskedasticSingleTaskGP: (i) you want to be able to do out-of-sample noise predictions, which is required e.g. for lookahead acquisition functions such as KG, or (ii) you want to regularize the noise observations, since those themselves may be overly noisy.

Some more detail on this is here: pytorch/botorch#1436 (comment)

@bernardbeckerman
Contributor

Hi @Song-Hyeong-Yun, I've adapted @danielcohenlive's example with a generation strategy taken from issue #1178 and am able to generate points as follows:

from botorch.acquisition.monte_carlo import qNoisyExpectedImprovement
from botorch.models import HeteroskedasticSingleTaskGP
from ax.models.torch.botorch_modular.surrogate import Surrogate
from ax.modelbridge.registry import Models
from ax.modelbridge.generation_node import GenerationStep
from ax.modelbridge.generation_strategy import GenerationStrategy
from ax.service.ax_client import AxClient

def blackbox(params):  # rewrite me
    return (params["x1"], 0.1)

gs = GenerationStrategy(
    steps=[
        GenerationStep(  # BayesOpt step
            model=Models.BOTORCH_MODULAR,
            # No limit on how many generator runs will be produced
            num_trials=-1,
            model_kwargs={  # Kwargs to pass to `BoTorchModel.__init__`
                "surrogate": Surrogate(HeteroskedasticSingleTaskGP),
                "botorch_acqf_class": qNoisyExpectedImprovement,
            },
        )
    ]
)

ax_client = AxClient(generation_strategy=gs)

ax_client.create_experiment(
    name="blackbox_experiment",
    parameters=[
        {
            "name": "x1",
            "type": "range",
            "bounds": [-3.0, 5.0],
            "value_type": "float",  # Optional, defaults to inference from type of "bounds".
            "log_scale": False,  # Optional, defaults to False.
        },
        {
            "name": "x2",
            "type": "range",
            "bounds": [0.0, 7.0],
        },
    ],
    objective_name="foo",
    minimize=True,  # it could be false though
)

ax_client.attach_trial({'x1': 1.0, 'x2': 2.0})
ax_client.attach_trial({'x1': 3.0, 'x2': 6.0})
ax_client.attach_trial({'x1': -2.0, 'x2': 3.0})
ax_client.attach_trial({'x1': 4.0, 'x2': 4.0})
ax_client.attach_trial({'x1': 1.0, 'x2': 3.0})

ax_client.complete_trial(0, {'foo':(10.0, 0.1)})
ax_client.complete_trial(1, {'foo':(2.0, 0.1)})
ax_client.complete_trial(2, {'foo':(20.0, 0.1)})
ax_client.complete_trial(3, {'foo':(5.0, 0.1)})
ax_client.complete_trial(4, {'foo':(5.0, 0.1)})

for i in range(5):
    parameters, trial_index = ax_client.get_next_trial()
    ax_client.complete_trial(trial_index=trial_index, raw_data=blackbox(parameters))

best_parameters, values = ax_client.get_best_parameters()
print(best_parameters)

Note that you may want to keep the Sobol GenerationStep from #1178 to increase the random exploration of the space that will seed the BO model. Please let me know if following this example works for you, or provide a full code sample as @Balandat requests above.
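
For reference, a minimal sketch of such a two-step strategy (the number of Sobol trials below is an arbitrary choice, not something prescribed by #1178) could look like:

from botorch.acquisition.monte_carlo import qNoisyExpectedImprovement
from botorch.models import HeteroskedasticSingleTaskGP
from ax.models.torch.botorch_modular.surrogate import Surrogate
from ax.modelbridge.registry import Models
from ax.modelbridge.generation_node import GenerationStep
from ax.modelbridge.generation_strategy import GenerationStrategy

gs = GenerationStrategy(
    steps=[
        # Quasi-random exploration to seed the model.
        GenerationStep(model=Models.SOBOL, num_trials=5),
        # Then switch to the modular BoTorch model for all remaining trials.
        GenerationStep(
            model=Models.BOTORCH_MODULAR,
            num_trials=-1,
            model_kwargs={
                "surrogate": Surrogate(HeteroskedasticSingleTaskGP),
                "botorch_acqf_class": qNoisyExpectedImprovement,
            },
        ),
    ]
)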

@Song-Hyeong-Yun
Author

Song-Hyeong-Yun commented Oct 10, 2022

@Balandat Thank you for your answer!

Could you provide a full code sample for how you are defining the GenerationStrategy with the HeteroskedasticSingleTaskGP?
Here's my full code:

from botorch.acquisition.knowledge_gradient import qKnowledgeGradient  # needed for the acquisition class used below
from botorch.acquisition.monte_carlo import qNoisyExpectedImprovement
from botorch.models import HeteroskedasticSingleTaskGP
from ax.models.torch.botorch_modular.surrogate import Surrogate
from ax.modelbridge.registry import Models
from ax.modelbridge.generation_node import GenerationStep
from ax.modelbridge.generation_strategy import GenerationStrategy
from ax.service.ax_client import AxClient

gs = GenerationStrategy(
    steps=[
        GenerationStep(
            model=Models.BOTORCH_MODULAR,
            # No limit on how many generator runs will be produced
            num_trials=-1,
            model_kwargs={  # Kwargs to pass to `BoTorchModel.__init__`
                "surrogate": Surrogate(HeteroskedasticSingleTaskGP),  
                "botorch_acqf_class": qKnowledgeGradient,
            }
        )
    ]
)

ax_client = AxClient(generation_strategy = gs)

ax_client.create_experiment(
    name="cobalt_catalyst",
    parameters=[
        {
            "name": "wt",
            "type": "range",
            "bounds": [0.0, 50.0],
            'value_type': 'int'
        },
        {
            "name": "DT",
            "type": "range",
            "bounds": [50.0, 300.0],
            'value_type': 'int'
        },
        {
            "name": "CT",
            "type": "range",
            "bounds": [300.0, 850.0],
            'value_type': 'int'
        }
    ],
    objective_name="Yield",
    minimize=False,
)

ax_client.attach_trial({'wt': 25, 'DT': 170, 'CT':700})
ax_client.attach_trial({'wt': 25, 'DT': 170, 'CT':700})
ax_client.attach_trial({'wt': 25, 'DT': 115, 'CT':700})
ax_client.attach_trial({'wt': 25, 'DT': 115, 'CT':700})
ax_client.attach_trial({'wt': 25, 'DT': 150, 'CT':700})
ax_client.attach_trial({'wt': 25, 'DT': 190, 'CT':700})
ax_client.attach_trial({'wt': 25, 'DT': 190, 'CT':700})
ax_client.attach_trial({'wt': 25, 'DT': 210, 'CT':700})
ax_client.attach_trial({'wt': 25, 'DT': 250, 'CT':700})
ax_client.attach_trial({'wt': 25, 'DT': 170, 'CT':450})
ax_client.attach_trial({'wt': 25, 'DT': 170, 'CT':300})

ax_client.complete_trial(0, {'Yield':(208.8, 15.061374)})
ax_client.complete_trial(1, {'Yield':(230.1, 10.0)})
ax_client.complete_trial(2, {'Yield':(170.4, 1.979899)})
ax_client.complete_trial(3, {'Yield':(167.6, 10.0)})
ax_client.complete_trial(4, {'Yield':(197.1, 10.0)})
ax_client.complete_trial(5, {'Yield':(191.4, 2.404163)})
ax_client.complete_trial(6, {'Yield':(188.0, 10.0)})
ax_client.complete_trial(7, {'Yield':(200.1, 10.0)})
ax_client.complete_trial(8, {'Yield':(167.8, 10.0)})
ax_client.complete_trial(9, {'Yield':(110.9, 10.0)})
ax_client.complete_trial(10, {'Yield':(123.7, 10.0)})

for i in range(1):
    parameters, trial_index = ax_client.get_next_trial()

Taking a step back, what is the reason you want to use that model in the first place? Are you interested in modeling the noise level out-of-sample? Note that (despite the maybe somewhat unintuitive name) even a FixedNoiseGP will properly deal with heteroskedastic noise, and in fact it is the default model chosen under the hood when not specifying a custom GenerationStrategy. There are basically two reasons you'd want to use a HeteroskedasticSingleTaskGP: (i) you want to be able to do out-of-sample noise predictions, which is required e.g. for lookahead acquisition functions such as KG, or (ii) you want to regularize the noise observations, since those themselves may be overly noisy.

I didn't know that a FixedNoiseGP could deal with heteroskedastic noise. I just want to supply a different variance for each observation. In this situation, can I use a FixedNoiseGP?

@Balandat
Contributor

I didn't know that a FixedNoiseGP could deal with heteroskedastic noise. I just want to supply a different variance for each observation. In this situation, can I use a FixedNoiseGP?

Yes. Basically, if you don't specify a GenerationStrategy and pass in the SEMs with the data (as you do above), the default strategy will automatically choose a FixedNoiseGP under the hood and properly take into account the different noise levels at the different evaluations.
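
As a concrete sketch (trimmed to a single attached trial, with the parameter setup copied from your code above; note there is no generation_strategy argument):

from ax.service.ax_client import AxClient

ax_client = AxClient()  # no custom GenerationStrategy: Ax picks the default one
ax_client.create_experiment(
    name="cobalt_catalyst",
    parameters=[
        {"name": "wt", "type": "range", "bounds": [0.0, 50.0], "value_type": "int"},
        {"name": "DT", "type": "range", "bounds": [50.0, 300.0], "value_type": "int"},
        {"name": "CT", "type": "range", "bounds": [300.0, 850.0], "value_type": "int"},
    ],
    objective_name="Yield",
    minimize=False,
)

ax_client.attach_trial({"wt": 25, "DT": 170, "CT": 700})
# Passing (mean, SEM) here is what lets the default model use the observed noise.
ax_client.complete_trial(0, {"Yield": (208.8, 15.061374)})

parameters, trial_index = ax_client.get_next_trial()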

@Song-Hyeong-Yun
Author

@Balandat
Thank you for your effort. I solved the problem thanks to you.

@Song-Hyeong-Yun
Author

@bernardbeckerman
Thank you for your effort.
I misunderstood the difference between FixedNoiseGP and HeteroskedasticSingleTaskGP.
I decided to choose FixedNoiseGP instead of HeteroskedasticSingleTaskGP.

Apart from that, the same error occurred when I executed your code.
However, you don't have to worry about this anymore.
I really appreciate your efforts.

@lena-kashtelyan
Contributor

Closing as inactive. Please reopen if following up.
