[GENERAL SUPPORT]: Getting best predicted point of a botorch model #2636
Hi @RoeyYadgar, I'll follow up internally to see the best way to answer this. One quick follow-up is, when are you looking to obtain the best posterior mean point? E.g., is this after the experiment concludes and the last trial has been evaluated, or are you trying to obtain the best posterior mean point after each trial completes (and if so, why is that)? I'm asking because if it's the former, and no additional trials are being generated, you would only be training the model once (to produce the posterior mean minimizer) and there wouldn't be wasted compute. |
I'm indeed trying to do this after each trial. I do that to evaluate and analyze the performance of a simulated optimization problem.
I'm not sure I understand what you mean by this. Why is it that in the former option I'd only be training it once? |
I'd be interested to hear more about how your performance analysis incorporates the posterior mean minimizer.
If you were only performing best-posterior-mean acquisition at the end of the optimization, you ostensibly wouldn't have performed a GPEI acquisition since you would use that to generate the next point to sample and there is no next point to sample (since optimization is complete). cc @esantorella on how to use a new acquisition function on the already-trained surrogate model of a given modelbridge. |
Hi @RoeyYadgar, if you want the best point out of the whole search space, not just the points evaluated so far, you might be able to get what you're looking for by replacing the acquisition function with BoTorch's `PosteriorMean`:

```python
from ax.models.torch.botorch_modular.acquisition import Acquisition
from botorch.acquisition.analytic import PosteriorMean
from ax.models.torch_base import TorchOptConfig

generation_step = ...
model = generation_step._fitted_model.model
acq = Acquisition(
    botorch_acqf_class=PosteriorMean,
    surrogates=model.surrogates,
    search_space_digest=model.search_space_digest,
    torch_opt_config=TorchOptConfig(...),
)
best_point, value, _ = acq.optimize(n=1, search_space_digest=model.search_space_digest)
```

If this doesn't work, could you provide a reproducible example? I recommend working with the … |
Hi,
I'm using this as a measure of "how well the optimization process is doing" as a function of the number of samples, comparing the predicted posterior mean minimizer (and the posterior variance) to the actual minimal value of the function. Nothing too fancy :)
This works well, thanks! I'm thinking of just implementing a … I also noticed that … Thanks a lot :) |
Makes sense! The true function value at the model's predicted best point is known as "inference regret" (as you may know) and is useful for benchmarking. I'd also find this valuable. Ax has generic best-point functionality through BestPointMixin, but that only uses "in-sample" prediction on arms that have been tried so far.
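For a simulated benchmark where the true objective is known, inference regret can be computed directly from the model's recommended point. A minimal plain-Python sketch (illustrative names `f`, `x_hat`, `f_min`; not an Ax API):

```python
def inference_regret(f, x_hat, f_min):
    """Inference regret for minimization: the true objective value at the
    model's recommended point minus the true optimal value."""
    return f(x_hat) - f_min

# Toy example: true objective f(x) = (x - 0.3) ** 2 with optimal value 0.0.
f = lambda x: (x - 0.3) ** 2
regret = inference_regret(f, x_hat=0.25, f_min=0.0)  # small but nonzero
```

Tracking this quantity after each trial gives the "how well is the optimization doing" curve described above.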
Ah yes, good catch.
Hmm, I think you might run into the same problem: the needed transform information lives on the … I think what you want to do is change the … |
Hmm, this error message looks quite old, so it's possible it is not correct anymore. Could you send a reproducible example? If this is in fact working, I'd be more than happy to accept a PR removing the error and adding unit tests demonstrating that it works, or to add this to our backlog to fix. |
I see, I wasn't aware of this. It can be useful as well, thanks!
What I ended up doing is creating a …
I see, this is really helpful, but I got a bit confused. If I understand correctly, this is used in … Also, when I try to use it with … Attaching a reproducible example of both attempts:

```python
from ax import (
    Experiment,
    Models,
    Objective,
    OptimizationConfig,
    ParameterType,
    RangeParameter,
    Runner,
    SearchSpace,
)
from ax.metrics.l2norm import L2NormMetric
from ax.models.torch.botorch_defaults import recommend_best_out_of_sample_point

# Initialize an experiment with the example from
# https://ax.dev/tutorials/gpei_hartmann_developer.html
class MyRunner(Runner):
    def run(self, trial):
        trial_metadata = {"name": str(trial.index)}
        return trial_metadata

search_space = SearchSpace(
    parameters=[
        RangeParameter(
            name=f"x{i}", parameter_type=ParameterType.FLOAT, lower=0.0, upper=1.0
        )
        for i in range(2)
    ]
)
param_names = [f"x{i}" for i in range(2)]
optimization_config = OptimizationConfig(
    objective=Objective(
        metric=L2NormMetric(name="l2norm", param_names=param_names),
        minimize=True,
    )
)
exp = Experiment(
    name="exp_test",
    search_space=search_space,
    optimization_config=optimization_config,
    runner=MyRunner(),
)

NUM_SOBOL_TRIALS = 5
sobol = Models.SOBOL(search_space=exp.search_space)
for i in range(NUM_SOBOL_TRIALS):
    generator_run = sobol.gen(n=1)
    trial = exp.new_trial(generator_run=generator_run)
    trial.run()
    trial.mark_completed()

###############################################################################
# Best point with Models.GPEI (which uses BotorchModel)
gpei_model = Models.GPEI(experiment=exp, data=exp.fetch_data())
print(f"Best sampled point GPEI: {gpei_model.model_best_point()}")
try:
    gpei_model = Models.GPEI(
        experiment=exp,
        data=exp.fetch_data(),
        best_point_recommender=recommend_best_out_of_sample_point,
    )
    print(f"Best out of sample point GPEI: {gpei_model.model_best_point()}")
except Exception as e:
    print(e)

# Best point with Models.BOTORCH_MODULAR (which uses BoTorchModel)
botorch_model = Models.BOTORCH_MODULAR(experiment=exp, data=exp.fetch_data())
print(f"Best sampled point BOTORCH MODULAR: {botorch_model.model_best_point()}")

# Create a subclass of BoTorchModel that calls surrogate.best_out_of_sample_point
# when model_best_point is used
from ax.models.torch.botorch_modular.model import BoTorchModel
from ax.modelbridge.torch import TorchModelBridge
from ax.modelbridge import registry
from aenum import extend_enum
from ax.core import ObservationFeatures

class BotorchModelOOS(BoTorchModel):
    def best_point(self, search_space_digest, torch_opt_config, options=None):
        try:
            return self.surrogate.best_out_of_sample_point(
                search_space_digest=search_space_digest,
                torch_opt_config=torch_opt_config,
            )[0]
        except ValueError:
            return None

# Register the model
registry.MODEL_KEY_TO_MODEL_SETUP["BOTORCH_MODULAR_OOS"] = registry.ModelSetup(
    bridge_class=TorchModelBridge,
    model_class=BotorchModelOOS,
    transforms=registry.Cont_X_trans + registry.Y_trans,
    standard_bridge_kwargs=registry.STANDARD_TORCH_BRIDGE_KWARGS,
)
extend_enum(registry.Models, "BOTORCH_MODULAR_OOS", "BOTORCH_MODULAR_OOS")

try:
    botorch_model = Models.BOTORCH_MODULAR_OOS(experiment=exp, data=exp.fetch_data())
    # Still giving an error; does it need to return candidates[0], acqf_values in
    # https://github.com/facebook/Ax/blob/e459a083f334170ad155911af06cf665a010e549/ax/models/torch/botorch_modular/surrogate.py#L690 ?
    print(f"Best out of sample point BOTORCH_MODULAR: {botorch_model.model_best_point()}")
except Exception as e:
    print(e)

try:
    # Giving None because of the exception raised here?
    # https://github.com/facebook/Ax/blob/e459a083f334170ad155911af06cf665a010e549/ax/models/torch/botorch_modular/surrogate.py#L662
    print(
        "Best out of sample point BOTORCH_MODULAR with fixed feature: "
        f"{botorch_model.model_best_point(fixed_features=ObservationFeatures({'x0': 0}))}"
    )
except Exception as e:
    print(e)
``` |
Thanks for the repro! I'm able to reproduce this and am getting the error you reported:
This looks like a bug to me. Under the hood, the BoTorch function … And sorry about the … |
That bug is fixed by #2652 . |
Closing since the question is answered and bug is fixed, but feel free to reopen if you have more questions, and we're noting that this functionality is important and ought to be supported better. |
Question
Hi,
I'm trying to get the best predicted point (the point that minimizes the posterior mean, rather than the best point observed so far) of a fitted model.
What I currently do is perform the optimization loop with the following GenerationStep:
And when I want to get the best predicted point, I create a new model with the PosteriorMean acquisition class and generate a trial with it:
This does work, but it re-fits the data (even though I already fitted the same data in the optimization step). Is there a better way to do this that doesn't require fitting the data again?
Thanks :)
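The wasteful pattern described above (fitting the same data once to generate candidates and again to extract the best point) can be sketched generically: cache the fitted model keyed on the data it was trained on, and reuse it for both steps. This is a plain-Python illustration of the idea only; `fit_fn` is a hypothetical callable, not the Ax API:

```python
class FitCache:
    """Cache a fitted surrogate so the same dataset is never fit twice.

    `fit_fn` is a hypothetical callable that trains a model on a dataset;
    this sketches the caching idea, not Ax internals.
    """

    def __init__(self, fit_fn):
        self.fit_fn = fit_fn
        self.fit_count = 0
        self._cache_key = None
        self._model = None

    def get_model(self, data):
        key = len(data)  # refit only when new observations have arrived
        if key != self._cache_key:
            self._model = self.fit_fn(data)
            self.fit_count += 1
            self._cache_key = key
        return self._model


# Usage: the next-candidate step and the best-point step share one fit.
cache = FitCache(fit_fn=lambda data: {"trained_on": list(data)})
data = [1.0, 2.0, 3.0]
m1 = cache.get_model(data)  # fits the model
m2 = cache.get_model(data)  # reuses the cached fit, no second training
```

In the answers above, `generation_step._fitted_model.model` plays this role: the surrogate fitted during candidate generation is reused for the posterior-mean optimization.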