Implementing evaluate qKG as requested in #350 #515
Conversation
Codecov Report
@@ Coverage Diff @@
## master #515 +/- ##
=========================================
Coverage 100.00% 100.00%
=========================================
Files 84 84
Lines 5274 5316 +42
=========================================
+ Hits 5274 5316 +42
Continue to review full report at Codecov.
One more thing: I left out support for constraints / fixed_features, since …
This looks great, thanks a lot for putting this up. I have a number of inline nits, nothing major.
As you indicated, tests take a long time, b/c you're running the actual model with a ton of input combinations, which really adds up. Let me see if I can help out with mocking out these tests.
I updated the code based on the review, and reduced the number of test combinations. The updated tests take ~200ms total.
Great, thanks for bringing down the test complexity. We probably should do an audit for that to cut unnecessary tests.
Apart from one little nit this looks great. Can you update? Then I'll merge it in.
Thanks!
Co-authored-by: Max Balandat <Balandat@users.noreply.github.com>
Updated! Thanks for the review!
@Balandat has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
My bad, I committed from GitHub without testing. It is fixed now.
I restructured …
@Balandat has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@saitcakmak has updated the pull request. You must reimport the pull request before landing.
@Balandat has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Many thanks for the contribution, @saitcakmak!
Happy to help! Thanks for the reviews, @Balandat!
Summary:

#### New Features
* Constrained Multi-Objective tutorial (#493)
* Multi-fidelity Knowledge Gradient tutorial (#509)
* Support for batch qMC sampling (#510)
* New `evaluate` method for `qKnowledgeGradient` (#515)

#### Compatibility
* Require PyTorch >=1.6 (#535)
* Require GPyTorch >=1.2 (#535)
* Remove deprecated `botorch.gen` module (#532)

#### Bug fixes
* Fix bad backward-indexing of `task_feature` in `MultiTaskGP` (#485)
* Fix bounds in constrained Branin-Currin test function (#491)
* Fix `max_hv` for C2DTLZ2 and make `Hypervolume` always return a float (#494)
* Fix bug in `draw_sobol_samples` that did not use the proper effective dimension (#505)
* Fix constraints for `q>1` in `qExpectedHypervolumeImprovement` (c80c4fd)
* Only use feasible observations in partitioning for `qExpectedHypervolumeImprovement` in `get_acquisition_function` (#523)
* Improved GPU compatibility for `PairwiseGP` (#537)

#### Performance Improvements
* Reduce memory footprint in `qExpectedHypervolumeImprovement` (#522)
* Add `(q)ExpectedHypervolumeImprovement` to nonnegative functions [for better initialization] (#496)

#### Other changes
* Support batched `best_f` in `qExpectedImprovement` (#487)
* Allow to return full tree of solutions in `OneShotAcquisitionFunction` (#488)
* Added `construct_inputs` class method to models to programmatically construct the inputs to the constructor from a standardized `TrainingData` representation (#477, #482, 3621198)
* Acquisition function constructors now accept catch-all `**kwargs` options (#478, e5b6935)
* Use `psd_safe_cholesky` in `qMaxValueEntropy` for better numerical stability (#518)
* Added `WeightedMCMultiOutputObjective` (81d91fd)
* Add ability to specify `outcomes` to all multi-output objectives (#524)
* Return optimization output in `info_dict` for `fit_gpytorch_scipy` (#534)
* Use `setuptools_scm` for versioning (#539)

Pull Request resolved: #542
Reviewed By: sdaulton
Differential Revision: D23645619
Pulled By: Balandat
fbshipit-source-id: 0384f266cbd517aacd5778a6e2680336869bb31c
Motivation
Resolves #350. The one-shot implementation of KG is not suitable for evaluating the KG value of the candidates. This PR implements an `evaluate_kg` method that returns the acquisition value of the candidates by solving the inner optimization problem.

Have you read the Contributing Guidelines on pull requests?
Yes.
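For illustration, a minimal usage sketch of the new method. Per the release notes quoted above, it landed on `qKnowledgeGradient` as `evaluate`; the toy data, `num_fantasies`, and the `num_restarts` / `raw_samples` keyword arguments below are illustrative assumptions rather than the PR's own example:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.acquisition import qKnowledgeGradient

# Toy model on a 2d problem (data and sizes are placeholders).
train_X = torch.rand(20, 2)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)

qKG = qKnowledgeGradient(model, num_fantasies=8)
bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]])

# 5 candidate sets of q=2 points each (shape `b x q x d`).
X = torch.rand(5, 2, 2)

# Returns the KG value of each candidate set by solving the inner
# optimization problem (one value per candidate set, shape `b`).
kg_vals = qKG.evaluate(X, bounds=bounds, num_restarts=4, raw_samples=32)
```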
Test Plan
Tested by: i) comparing the `evaluate_kg` value of the optimizer of `optimize_acqf(qKG)` to a set of randomly drawn candidates - the optimizer should have the largest value; ii) comparing the value reported by `solution, value = optimize_acqf(qKG)` to `evaluate_kg(solution)` - `evaluate_kg` should return a larger value as it does global optimization. Base script: test_eval_kg.txt.
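A rough, self-contained sketch of checks (i) and (ii). This is not the attached base script; the model, bounds, and optimizer settings are illustrative assumptions, and the method is called `evaluate` here, as it landed per the release notes:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.acquisition import qKnowledgeGradient
from botorch.optim import optimize_acqf

torch.manual_seed(0)
train_X = torch.rand(20, 2)
train_Y = torch.sin(6.0 * train_X).sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)
qKG = qKnowledgeGradient(model, num_fantasies=8)
bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]])

# One-shot optimization of qKG; `value` is the one-shot objective value.
solution, value = optimize_acqf(
    qKG, bounds=bounds, q=2, num_restarts=4, raw_samples=64
)

# (ii) `evaluate` re-solves the inner problem globally at `solution`, so it
# should not report a smaller value (up to Monte Carlo noise).
eval_value = qKG.evaluate(
    solution.unsqueeze(0), bounds=bounds, num_restarts=4, raw_samples=64
)

# (i) the optimizer of qKG should beat randomly drawn candidate sets.
X_rand = torch.rand(10, 2, 2)
rand_values = qKG.evaluate(X_rand, bounds=bounds, num_restarts=4, raw_samples=64)
print(value.item(), eval_value.item(), rand_values.max().item())
```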
Changes
* Added a `qKnowledgeGradient.evaluate_kg()` method. This generates the fantasies and the value function, calls `gen_value_function_initial_conditions` to generate initial conditions, and uses `gen_candidates_scipy` to optimize them as a big batch. It then selects the batch-wise maximizer and returns the average over the fantasies.
* Added `gen_value_function_initial_conditions` in `botorch.optim.initializers`. Much like `gen_one_shot_kg_initial_conditions`, this first solves the current problem. The solutions to the current problem are then used to generate a `1 - frac_random` fraction of the raw samples, and the rest are generated using `draw_sobol_samples`. All raw samples are then evaluated, and the initial conditions are generated by passing the raw samples and their evaluations to `initialize_q_batch`.
* Modified `initialize_q_batch` in `botorch.optim.initializers` to allow for input `X` of shape `b x batch_shape x q x d` and `Y` of shape `b x batch_shape`, with a corresponding output shape of `n x batch_shape x q x d`. The batches are processed independently to ensure the quality of the initial conditions corresponding to each batch (see the sketch after this list).
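A small sketch of the batched shapes `initialize_q_batch` accepts after this change, per the last bullet above. The raw samples and their values are random stand-ins (in the actual initializer, a `1 - frac_random` fraction of the raw samples comes from solutions of the current problem), and the call reflects the BoTorch version targeted by this PR, so later releases may differ:

```python
import torch
from botorch.optim.initializers import initialize_q_batch

b, batch_shape, q, d, n = 32, 4, 2, 3, 5

# Raw samples with a leading `b` dimension and an extra `batch_shape`
# dimension: shape `b x batch_shape x q x d`.
X_raw = torch.rand(b, batch_shape, q, d)

# Stand-in acquisition values for the raw samples: shape `b x batch_shape`.
Y_raw = torch.rand(b, batch_shape)

# Heuristically pick `n` promising initial conditions per batch; the
# `batch_shape` batches are processed independently, and the result has
# shape `n x batch_shape x q x d`.
X_init = initialize_q_batch(X=X_raw, Y=Y_raw, n=n)
```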
Things to address (TODO):
* Should `gen_value_function_initial_conditions` be added to `botorch.optim.initializers.__init__.py`?
* I could not get `MockModel` to work for testing `evaluate_kg`. Where I failed was in optimizing the current problem in `gen_value_function_initial_conditions`, where the `PosteriorMean` of the `MockModel` would raise a `forward` not found error. I tried to `patch` it but couldn't get it to work.
* With the current tests using `SingleTaskGP`, the tests (`pytest -ra`) take 75 seconds on my desktop, of which 15 seconds is for testing `evaluate_kg` only. This should be improved.

cc @Balandat, @danielrjiang