
Scaling parameter values to improve GP hyperparameter optimization #38

Closed
dflemin3 opened this issue Mar 15, 2019 · 1 comment


@dflemin3 (Owner)

In the original BAPE algorithm paper, Kandasamy+2015 scaled the model parameter values to [0,1] using the appropriate simple linear transformation. Performing this scaling in approxposterior could help with convergence and numerical stability by keeping parameter values in a reasonable range, especially for the GP's metric scales.

This can be implemented without too much difficulty using the sklearn preprocessing module, e.g. the MinMaxScaler. Furthermore, the sklearn codebase is well-tested and robust, so its inclusion shouldn't introduce too many dependency issues.
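A minimal sketch of how MinMaxScaler would handle the linear rescaling (the array shapes and values below are purely illustrative, not approxposterior's actual internals):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Illustrative set of model parameter samples, shape (n_samples, n_dims)
theta = np.random.uniform(low=[-10.0, 1.0], high=[10.0, 1.0e4], size=(50, 2))

# Map each parameter (column) linearly onto [0, 1]
scaler = MinMaxScaler(feature_range=(0, 1))
theta_scaled = scaler.fit_transform(theta)

# Points in scaled space can be mapped back to physical units when needed,
# e.g. before evaluating the forward model
theta_physical = scaler.inverse_transform(theta_scaled)
```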

To do this, I could either use the bounds kwarg that stipulates the hard bounds for the model parameters, or I could train the scaler on the GP's initial theta, although I think the former option is preferable to the latter.
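As a rough sketch of the bounds-based option (the bounds values and variable names here are placeholders rather than the actual approxposterior API), the scaler can be fit directly on the lower/upper bounds so that (0, 1) corresponds to the full allowed range instead of whatever region the initial theta happens to cover:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Placeholder hard bounds, one (low, high) pair per model parameter,
# mirroring the bounds kwarg described above
bounds = [(-10.0, 10.0), (1.0, 1.0e4)]

# Fit on a (2, n_dims) array whose rows are the lower and upper bounds,
# so the scaler's data_min_/data_max_ match the hard bounds exactly
scaler = MinMaxScaler(feature_range=(0, 1))
scaler.fit(np.asarray(bounds).T)

theta = np.array([[0.0, 5000.0], [-5.0, 100.0]])      # example parameter samples
theta_scaled = scaler.transform(theta)                # values now lie within [0, 1]
theta_back = scaler.inverse_transform(theta_scaled)   # recover physical units
```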

dflemin3 added this to the 0.3 release milestone on Mar 15, 2019
@dflemin3 (Owner, Author)

Added the ability to scale parameters to (0, 1) and am working on more robust scaling (#44) on the dev branch.
