how to force models to make more exploration #460
Comments
So I guess it really depends on the behavior of … Orthogonal to that, another challenge may be the choice parameters. There is a known issue with our optimization strategy that can result in excessive repeated evaluations of the same choice values. We have a fix for this in the works, but in the meantime maybe you can help identify whether that's the issue for you here. If you know the optimal choice parameter values, could you remove them from the search space (hard-code them to the optimal values in your evaluation function) and see whether you still observe the same behavior? If not, then this issue is likely the culprit here.
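A minimal sketch of the suggested workaround, assuming hypothetical choice parameters `x11`/`x12` and a placeholder objective (names and pinned values are illustrative, not from the thread): drop the choice parameters from the search space and pin them inside the evaluation function instead.

```python
# Sketch of the suggested workaround: remove the choice parameters from the
# Ax search space and hard-code them to fixed values in the evaluation
# function, so only the range parameters are searched.

def objective_function(x):
    # pin the (assumed) best choice values instead of optimizing over them
    params = dict(x, x11="8", x12="12")
    # placeholder objective: sum of the numeric range parameters
    f = sum(v for v in params.values() if isinstance(v, (int, float)))
    return {"f": (f, 0.0)}
```

With this in place, the `parameters` list passed to `create_experiment` would contain only the range parameters.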
Thank you for the quick response. I will remove the choice parameters and hard-code them as you recommend. Evaluating the objective function is a little costly, so it may take a few days to finish. I will post the results as soon as possible. Thank you.
@ertandemiral, we are currently wrapping up work on a new …
Thank you @lena-kashtelyan, I managed to plug the UCB acquisition function into Ax by creating …
On the other hand, when the choice parameters are excluded from the search space (as @Balandat suggested), the searching behavior of the models is affected positively: they now always converge to the global minimum point with increased exploration (figure below). Almost all models, especially …
I have another question about the issue of easily plugging in BoTorch acquisition functions. I also want to conduct an active learning experiment in Ax utilizing BoTorch …
Thank you.
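For reference, the UCB rule discussed here trades off posterior mean against posterior uncertainty through a single exploration weight. A minimal NumPy sketch of the idea (function names and the `beta` default are illustrative, not Ax's or BoTorch's API):

```python
import numpy as np

def upper_confidence_bound(mean, std, beta=4.0):
    """UCB acquisition value: posterior mean plus a multiple of the
    posterior standard deviation. Larger beta -> more exploration."""
    return mean + np.sqrt(beta) * std

def lower_confidence_bound(mean, std, beta=4.0):
    """Analogous rule for a minimization problem: prefer points whose
    optimistic (lower) bound on the objective is smallest."""
    return mean - np.sqrt(beta) * std
```

Increasing `beta` weights the uncertainty term more heavily, which is one common lever for pushing a Bayesian optimization loop toward exploration.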
@ertandemiral I'm glad to hear that removing the choice parameters seems to work for you! Re: the …
@ertandemiral, I believe @ldworkin is correct, cc @Balandat in case he has thoughts on the setup.
Yeah, qNIPV is agnostic to the direction; the goal is to minimize a global measure of uncertainty of the model, so there is no better or worse w.r.t. the function values.
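To illustrate why qNIPV has no notion of better or worse function values: it scores designs only by how much posterior uncertainty the model retains, averaged over the search space. A small sketch under assumed settings (zero-mean GP, RBF kernel with unit output scale, hypothetical lengthscale); this is a toy illustration, not BoTorch's actual implementation:

```python
import numpy as np

def rbf(A, B, lengthscale=0.5):
    # squared-exponential kernel between two point sets
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def integrated_posterior_variance(X_train, X_mc, noise=1e-6):
    """Average posterior variance of a zero-mean GP at Monte Carlo points.
    Note: the observed function VALUES never enter this quantity -- only
    the LOCATIONS of the training points do."""
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    k = rbf(X_mc, X_train)
    var = 1.0 - np.einsum("ij,jk,ik->i", k, np.linalg.inv(K), k)
    return var.mean()
```

Adding a training point can only shrink the GP's posterior variance, so the integrated posterior variance decreases as the design fills the space, regardless of what values the objective takes there.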
Thank you for your comments.
Sorry to revive this thread, but it seems like the most appropriate place to ask. I have something quite similar to @ertandemiral's set-up above, though I can't work out what dimension … should be. My assumption was that it should be N x D (N sampled points from across the D-dimensional search space), but I think I might be missing a batch dim. Can anyone advise? Thanks so much!
Hi @samueljamesbell, I have recently updated my setup after the addition of …
To run the above code, the input constructor for the acquisition class …
The dimension problem may occur due to the …
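On the shape question raised above: BoTorch acquisition functions generally evaluate inputs of shape `batch_shape x q x d`, so an `N x D` candidate set is typically passed as `N x 1 x D` (each of the N points forms its own batch of q = 1 candidates). A NumPy sketch of the reshape (sizes are illustrative):

```python
import numpy as np

N, D = 16, 5
X = np.random.rand(N, D)      # N x D candidate set

# insert a q=1 dimension so each candidate is a batch of one point,
# matching the `batch_shape x q x d` convention
X_batched = X[:, None, :]     # N x 1 x D
```

Without that extra dimension, a t-batch-aware acquisition function will typically misinterpret `N` as the number of joint candidates `q` rather than as a batch of independent points.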
Hey @ertandemiral, thanks so much for this! Turns out this line: … was exactly what I was missing. Thanks for the help!
Hello, @ertandemiral's solution is working great, but did you see the slicing results? Let's say I use exactly the same code but with the objective function f = x["x1"]**2.0. Do you understand what's happening here? Thanks for the help!
Hello @Kh-im, I obtained a properly fitting slice plot after running the experiment with your objective function (installed Ax version is 0.2.9).
xref: pytorch/botorch#422
Hi,
I would like to thank everyone who contributed to this great library. It enables easy use of Bayesian optimization with state-of-the-art algorithms.
I have implemented Ax for my single objective design optimization study. Here is the code snippet:
```python
from ax.service.ax_client import AxClient
from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
from ax.modelbridge.registry import Models

def objective_function(x):
    # region of f calculation
    # gives 'ErrorDesign' in case of error, otherwise float
    return {"f": (f, 0.0)}

gs = GenerationStrategy(
    steps=[
        GenerationStep(model=Models.SOBOL, num_trials=20),
        GenerationStep(model=Models.GPMES, num_trials=-1),
    ]
)

ax_client = AxClient(generation_strategy=gs)
ax_client.create_experiment(
    name="single_objective_design",
    parameters=[
        {"name": "x1",  "type": "range",  "bounds": [0.2, 1.0],   "value_type": "float"},
        {"name": "x2",  "type": "range",  "bounds": [2.0, 6.0],   "value_type": "float"},
        {"name": "x3",  "type": "range",  "bounds": [0.2, 1.0],   "value_type": "float"},
        {"name": "x4",  "type": "range",  "bounds": [1.7, 8.7],   "value_type": "float"},
        {"name": "x5",  "type": "range",  "bounds": [0, 25],      "value_type": "int"},
        {"name": "x6",  "type": "range",  "bounds": [4.0, 12.0],  "value_type": "float"},
        {"name": "x7",  "type": "range",  "bounds": [2.0, 5.0],   "value_type": "float"},
        {"name": "x8",  "type": "range",  "bounds": [0.2, 1.0],   "value_type": "float"},
        {"name": "x9",  "type": "range",  "bounds": [80.0, 95.0], "value_type": "float"},
        {"name": "x10", "type": "range",  "bounds": [0, 25],      "value_type": "int"},
        {"name": "x11", "type": "choice", "values": ["4", "8", "12", "16"], "value_type": "str"},
        {"name": "x12", "type": "choice", "values": ["4", "8", "12", "16"], "value_type": "str"},
    ],
    objective_name="f",
    minimize=True,
)

for _ in range(200):
    trial_params, trial_index = ax_client.get_next_trial()
    data = objective_function(trial_params)
    if data["f"][0] == 'ErrorDesign':
        ax_client.log_trial_failure(trial_index=trial_index)
    else:
        ax_client.complete_trial(trial_index=trial_index, raw_data=data["f"])
```
I have 12 design parameters (10 range, 2 choice) to be optimized and benefit from the Service API with generation strategies (Sobol + GPMES, Sobol + GPEI, Sobol + BOTORCH, Sobol + GPKG), as seen in the code snippet. I am using Python 3.8 and the latest versions of the botorch, gpytorch, and torch libraries.
Below is the history plot showing objective values with respect to iteration number for the different models. I have also added the history of the design parameters for the GPEI model.
My question is about the non-explorative search behavior of the models after the 20 Sobol iterations. As you can see from the objective history figure, successive designs have close objective values. Indeed, I would expect the code to do more exploration since the search space is quite large, but each model quickly converges to some local minimum and continues to search around it. By the way, the global minimum of the objective function is around -3.6.
I have tried the following, but the code behavior is not much affected:
Any help to force these generation strategies into making more exploration would be appreciated.
Thanks in advance.