Question: Transforming objective when passing `best_f` to `ProbabilityOfImprovement`, etc. #2401
Comments
Hi @IanDelbridge, yes, that's correct, and your solution is as well. It's actually not that hacky at all, all things considered. Ideally, we'd have an automated way of transforming the inputs to the acquisition functions, but since with the modular setup you could have any number and kind of those with different inputs, it's not obvious how to tell what to transform. Maybe a convenience feature could be something like adding a […]
Hi Max, thanks, I appreciate your response! I think the part that sets off alarms telling me my solution is a hack is the way I am doing the transform. Is there a better way to apply the transform and inverse transforms without making fake observation features? Something like […]
I think we should be able to easily expose a […]
Some transforms for ObservationData require knowing ObservationFeatures. In particular StratifiedStandardizeY, which standardizes Y stratified on some conditions on X (https://github.com/facebook/Ax/blob/main/ax/modelbridge/transforms/stratified_standardize_y.py#L34). It is used for multi-task modeling, where data from different tasks may have very different scales and so need to be standardized separately. This is why those transforms need the features alongside the data. We could implement a […]
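For readers unfamiliar with the stratified case mentioned above, the idea can be sketched in a few stdlib-only lines; the `task` labels and data here are illustrative, not Ax's actual implementation:

```python
import statistics
from collections import defaultdict

def stratified_standardize(ys, tasks):
    """Standardize each y using the mean/std of its own task's observations."""
    groups = defaultdict(list)
    for y, t in zip(ys, tasks):
        groups[t].append(y)
    stats = {t: (statistics.mean(g), statistics.stdev(g)) for t, g in groups.items()}
    return [(y - stats[t][0]) / stats[t][1] for y, t in zip(ys, tasks)], stats

# Two tasks on very different scales:
ys = [1.0, 2.0, 3.0, 100.0, 200.0, 300.0]
tasks = ["a", "a", "a", "b", "b", "b"]
zs, stats = stratified_standardize(ys, tasks)
# Both tasks now land on [-1.0, 0.0, 1.0] despite the 100x scale difference.
```

Because the per-task statistics differ, untransforming any value back to raw space requires knowing which task it belongs to, which is why the features are needed.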
Hi, I am running a basic `SingleTaskGP`-based optimization, and I would like to return `P(f(x) > 0)`, the probability of improving over a fixed objective baseline of 0. This is for a downstream system to assess the value of the candidate points and decide whether to stop optimization.

My understanding is that I should be able to get this by computing the `ProbabilityOfImprovement` acquisition function and supplying `{"best_f": 0}` as follows: […]

I think, though, that Ax applies transforms to the observation data before passing it to BoTorch, and I'm not sure whether 0 in the original space maps to 0 in the transformed space. Is that true? And if so, what is the recommended way of transforming the outcome?
The very hacky solution I've come up with looks like this, and I'm not sure if it's correct:
Thanks!
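For readers landing on this thread: the workaround discussed here amounts to pushing the raw-space baseline through the same standardization the modelbridge applies to Y. A stdlib-only sketch of that idea (the data and helper are illustrative, not Ax's API):

```python
import statistics

def standardize_stats(ys):
    """Mean/std used by a StandardizeY-style transform (zero mean, unit variance)."""
    return statistics.mean(ys), statistics.stdev(ys)

# Hypothetical raw observations of the objective:
raw_ys = [1.2, 3.4, 2.1, 5.0]
y_mean, y_std = standardize_stats(raw_ys)

# A raw-space baseline of 0 is generally NOT 0 in the transformed space
# the model was fit in, so it must be mapped through the same transform:
best_f_transformed = (0.0 - y_mean) / y_std
# best_f_transformed is negative here, since all observed ys are positive.
```

It is `best_f_transformed`, not 0, that would correspond to the raw-space baseline when constructing `ProbabilityOfImprovement` against a model fit on standardized outcomes.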