Add support for batch qMC sampling #510
Conversation
Codecov Report
@@           Coverage Diff            @@
##           master      #510   +/-   ##
=========================================
  Coverage   100.00%   100.00%
=========================================
  Files           84        84
  Lines         5270      5274     +4
=========================================
+ Hits          5270      5274     +4
Thanks for putting this up and the solid testing.
Thanks for the feedback! I made the changes.
This looks good to me, thanks for addressing the changes. I'll import and land.
@Balandat has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary:

#### New Features
* Constrained Multi-Objective tutorial (#493)
* Multi-fidelity Knowledge Gradient tutorial (#509)
* Support for batch qMC sampling (#510)
* New `evaluate` method for `qKnowledgeGradient` (#515)

#### Compatibility
* Require PyTorch >=1.6 (#535)
* Require GPyTorch >=1.2 (#535)
* Remove deprecated `botorch.gen` module (#532)

#### Bug fixes
* Fix bad backward-indexing of `task_feature` in `MultiTaskGP` (#485)
* Fix bounds in constrained Branin-Currin test function (#491)
* Fix `max_hv` for C2DTLZ2 and make `Hypervolume` always return a float (#494)
* Fix bug in `draw_sobol_samples` that did not use the proper effective dimension (#505)
* Fix constraints for `q>1` in `qExpectedHypervolumeImprovement` (c80c4fd)
* Only use feasible observations in partitioning for `qExpectedHypervolumeImprovement` in `get_acquisition_function` (#523)
* Improved GPU compatibility for `PairwiseGP` (#537)

#### Performance Improvements
* Reduce memory footprint in `qExpectedHypervolumeImprovement` (#522)
* Add `(q)ExpectedHypervolumeImprovement` to nonnegative functions [for better initialization] (#496)

#### Other changes
* Support batched `best_f` in `qExpectedImprovement` (#487)
* Allow returning the full tree of solutions in `OneShotAcquisitionFunction` (#488)
* Added `construct_inputs` class method to models to programmatically construct the inputs to the constructor from a standardized `TrainingData` representation (#477, #482, 3621198)
* Acquisition function constructors now accept catch-all `**kwargs` options (#478, e5b6935)
* Use `psd_safe_cholesky` in `qMaxValueEntropy` for better numerical stability (#518)
* Added `WeightedMCMultiOutputObjective` (81d91fd)
* Add ability to specify `outcomes` to all multi-output objectives (#524)
* Return optimization output in `info_dict` for `fit_gpytorch_scipy` (#534)
* Use `setuptools_scm` for versioning (#539)

Pull Request resolved: #542
Reviewed By: sdaulton
Differential Revision: D23645619
Pulled By: Balandat
fbshipit-source-id: 0384f266cbd517aacd5778a6e2680336869bb31c
Motivation
See #507. Batch qMC samples are useful for generating `raw_samples` when optimizing batched GP models / acquisition functions.
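As a rough illustration of the intended usage, here is a minimal sketch of drawing batched Sobol qMC samples. The `batch_shape` argument to `draw_sobol_samples` and the `n x batch_shape x q x d` output layout are assumptions based on this PR's description, not a verified spec.

```python
import torch
from botorch.utils.sampling import draw_sobol_samples

# Box bounds on a 3-dimensional space: row 0 = lower bounds, row 1 = upper bounds.
bounds = torch.stack([torch.zeros(3), torch.ones(3)])

# Assumed interface: draw n=16 qMC q-batches of size q=2 for a batch of 4 models.
samples = draw_sobol_samples(
    bounds=bounds, n=16, q=2, batch_shape=torch.Size([4]), seed=1234
)

# Assumed output layout: n x batch_shape x q x d.
print(samples.shape)  # expected torch.Size([16, 4, 2, 3]) under this assumption
```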
Have you read the Contributing Guidelines on pull requests?
Yes
Test Plan
Added unit tests verifying that the samples have the proper shape (all passing).
Used the script in #507 (comment) to verify that each batch of samples has low discrepancy.
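The script referenced in #507 is not reproduced here; as a rough stand-in, one could spot-check the per-batch discrepancy with SciPy's QMC utilities (assumptions: SciPy >= 1.7 providing `scipy.stats.qmc.discrepancy`, and the `n x batch_shape x q x d` layout sketched above).

```python
import torch
from scipy.stats import qmc
from botorch.utils.sampling import draw_sobol_samples

d = 2
bounds = torch.stack([torch.zeros(d), torch.ones(d)])

# Draw 256 points (q=1) in the unit square for each of 4 independent batches.
samples = draw_sobol_samples(
    bounds=bounds, n=256, q=1, batch_shape=torch.Size([4]), seed=0
)

# Centered L2 discrepancy per batch: low values indicate that the points in
# each batch cover the unit cube evenly, i.e. each batch is a valid qMC set.
for b in range(samples.shape[1]):
    pts = samples[:, b, 0, :].numpy()  # assumed layout: n x batch x q x d
    print(f"batch {b}: discrepancy = {qmc.discrepancy(pts):.3e}")
```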