Description
Hi,
First, thank you for the hard work of maintaining this very helpful library.
I have one question and would be glad if someone could help me.
We are using the standard Sobol + GPEI setup for max_x f(x) with bounds on x, something like:
from ax.modelbridge.generation_strategy import GenerationStrategy, GenerationStep
from ax.modelbridge.registry import Models

gs = GenerationStrategy(steps=[
    GenerationStep(Models.SOBOL, num_trials=max(5, int(num_concurrent * 1.5))),
    GenerationStep(Models.GPEI, num_trials=-1)])
The negative function values can be up to 50x larger in absolute value than the positive ones, so the distribution of outcome values f(x) is very fat-tailed and skewed.
Our optimization seems to get stuck for a long time because of this, apparently because it hurts the out-of-sample prediction quality of the GP.
What is the recommended approach to this?
It is not obvious whether a different choice of kernel would help here; the Matern kernel already seems to be fairly general.
We tried truncating the negative values and restarting the optimization with some additional Sobol steps, and that seems to help.
But we may be missing something obvious.
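For concreteness, here is a rough sketch of the kind of truncation we tried; the clip threshold and the toy objective are made up purely for illustration:

import numpy as np

CLIP_LOWER = -10.0  # hypothetical threshold, picked by inspecting the empirical distribution of f(x)

def f(x):
    # Stand-in for our actual (expensive, fat-tailed) objective.
    return float(np.sum(-np.abs(x) ** 3 + x))

def truncated_objective(x):
    # Clip extreme negative values before reporting them to Ax.
    return float(np.clip(f(x), CLIP_LOWER, None))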
Would adding an outcome constraint of the form f(x) > bound be the more proper thing to do? The docs at https://botorch.org/docs/constraints seem to indicate so, and to my understanding this approach is already supported in Ax, since qNoisyExpectedImprovement is used under the hood. I am not sure how outcome constraints are handled in detail; I only skimmed the ideas, but it seems like a promising approach.
So is this the way to go, and would it be more sample efficient than truncating the outcome values?
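For reference, here is a rough sketch of how we imagine expressing such a constraint via the Service API; the parameter names, the auxiliary metric "f_raw", and the bound of -10.0 are all assumptions made up for illustration, and we are not sure whether reporting the objective a second time as a constrained metric is the intended usage:

from ax.service.ax_client import AxClient

ax_client = AxClient(generation_strategy=gs)  # gs as defined above
ax_client.create_experiment(
    name="constrained_f",
    parameters=[
        {"name": "x1", "type": "range", "bounds": [-1.0, 1.0]},
        {"name": "x2", "type": "range", "bounds": [-1.0, 1.0]},
    ],
    objective_name="f",
    minimize=False,
    # Hypothetical auxiliary metric reported alongside "f"; the bound is illustrative.
    outcome_constraints=["f_raw >= -10.0"],
)

Each trial would then report both "f" and "f_raw" in its raw data.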
Thank you for your help.