In the original BAPE paper, Kandasamy+2015 scaled model parameter values to [0, 1] using the appropriate simple linear transformation. Performing the same scaling in approxposterior could improve convergence and numerical stability by keeping parameter values in a reasonable range, especially for parameters whose natural (metric) scales are large.
This can be implemented without much difficulty using the sklearn preprocessing module, e.g. the MinMaxScaler. Furthermore, the sklearn codebase is well-tested and robust, so its inclusion shouldn't introduce many dependency issues.
To do this, I could either fit the scaler using the bounds kwarg that stipulates the hard bounds for the model parameters, or train it on the GP's initial theta. I think the former is preferable: the hard bounds guarantee that every future point maps into [0, 1], whereas the initial theta need not span the full allowed range. A sketch of the bounds-based approach is below.
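Here is a minimal sketch of the bounds-based idea, assuming bounds is a sequence of (min, max) pairs, one per model parameter (the variable names here are illustrative, not approxposterior's actual API):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Hypothetical hard bounds for two model parameters, one (min, max)
# pair per parameter, as would come from the bounds kwarg.
bounds = [(-5.0, 5.0), (0.0, 10.0)]

# Fitting on the transposed bounds array (shape (2, n_params): row 0 =
# mins, row 1 = maxs) pins the [0, 1] mapping to the hard bounds rather
# than to whatever range the initial theta happens to cover.
scaler = MinMaxScaler(feature_range=(0, 1))
scaler.fit(np.asarray(bounds).T)

# Example design points in the original parameter space.
theta = np.array([[-2.5, 3.0],
                  [4.0, 9.5]])

# Kandasamy+2015-style linear map: x' = (x - min) / (max - min).
theta_scaled = scaler.transform(theta)

# Map points back to physical units before evaluating the forward model.
theta_orig = scaler.inverse_transform(theta_scaled)
```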