
Make get_infeasible_cost return a cost value for each outcome. #1191

Closed
wants to merge 1 commit

Conversation

saitcakmak
Contributor

Summary: The current implementation returns a single `M` value. This modifies it to return an `m`-dim tensor of infeasible cost values, each corresponding to one of the `m` outcomes.

Differential Revision: D35847505


fbshipit-source-id: 6617a9285b61f57c9d90cbf59cfae3c9c56296c2
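To make the change concrete, here is a minimal usage sketch (not part of the diff; the toy SingleTaskGP with two outcomes and the identity objective are assumptions for illustration):

import torch
from botorch.models import SingleTaskGP
from botorch.acquisition.utils import get_infeasible_cost

# Toy two-outcome model on 10 random training points (illustrative only).
train_X = torch.rand(10, 3, dtype=torch.double)
train_Y = torch.rand(10, 2, dtype=torch.double)
model = SingleTaskGP(train_X, train_Y)

# Identity objective; the optional X argument is accepted but ignored here.
M = get_infeasible_cost(X=train_X, model=model, objective=lambda Y, X=None: Y)
# Previously the result was a single scalar bound; with this change it is an
# m-dim tensor with one conservative cost value per outcome, e.g. shape (2,).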
@facebook-github-bot added the CLA Signed and fb-exported labels on Apr 23, 2022
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D35847505

@codecov

codecov bot commented Apr 23, 2022

Codecov Report

Merging #1191 (ef651bb) into main (92112b7) will increase coverage by 0.00%.
The diff coverage is 100.00%.

@@           Coverage Diff           @@
##             main    #1191   +/-   ##
=======================================
  Coverage   99.99%   99.99%           
=======================================
  Files         115      115           
  Lines       10134    10137    +3     
=======================================
+ Hits        10133    10136    +3     
  Misses          1        1           
Impacted Files                      Coverage            Δ
botorch/acquisition/objective.py    100.00% <100.00%>   (ø)
botorch/acquisition/utils.py        100.00% <100.00%>   (ø)

Δ = absolute <relative> (impact), ø = not affected, ? = missing data

lb = objective(posterior.mean - 6 * posterior.variance.clamp_min(0).sqrt())
if lb.ndim < posterior.mean.ndim:
    lb = lb.unsqueeze(-1)
# Take the outcome-wise min. Loop to handle batched models.
Contributor
So if I read this correctly, this computes the minimum across all batch dimensions? Is that what we want here? The previous implementation also did this, so it's probably fine to get this in as is, but we may want to think about this some more if the batch dims correspond to different data and we want to extract them individually (this may be relevant for fully Bayesian models - @dme65).

Contributor Author
Yeah, that is an interesting thought. I guess it would depend on how different we expect each model in the batch to be. Since they're all trained on the same set of inputs, I'd presume they'd be relatively similar, so one cost value should work for all. This would also affect the fantasized models: if we have two fantasy models, one conditioned on a large fantasy observation and the other on a small one, the cost value for one may be too conservative for the other. One could also argue that the 6 sigma we use here is too conservative to begin with. Anyway, I'm happy to revisit this later if we think it is worth exploring in more detail.
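For reference, an illustrative sketch of the outcome-wise reduction being discussed (not the exact diff; the toy `lb` and the final clamp are assumptions):

import torch

# Toy lower bounds with a leading batch dimension: shape (batch=3, n=4, m=2).
lb = torch.randn(3, 4, 2)

# Collapse all leading dimensions by taking the min; this reduces over the
# batch dimension as well, leaving one value per outcome.
while lb.dim() > 1:
    lb = lb.min(dim=0).values

# One conservative infeasible-cost value per outcome (assuming the cost is
# taken as the negative of the clamped lower bound).
M = -(lb.clamp_max(0.0))
print(M.shape)  # torch.Size([2])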

@@ -320,7 +320,7 @@ def test_constrained_mc_objective(self):
                 obj=constrained_obj,
                 constraints=[feasible_con, infeasible_con],
                 samples=samples,
-                infeasible_cost=0.0,
+                infeasible_cost=torch.tensor([0.0], device=self.device, dtype=dtype),
Contributor
you're accepting Union[Tensor, float] - should we test support for both input types?

Contributor Author
There are something like 6 tests in there. I only changed two of them to use tensors, so both float and tensor are being tested.
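For illustration, a sketch of exercising both accepted input types (the toy objective/constraint callables are assumptions, not taken from the test file):

import torch
from botorch.acquisition.objective import ConstrainedMCObjective

# Toy callables over samples of shape (..., m=2): outcome 0 is the objective,
# outcome 1 is a constraint (feasible if <= 0).
objective = lambda Y, X=None: Y[..., 0]
constraint = lambda Y: Y[..., 1]

# Both a plain float and an m-dim tensor should be accepted for infeasible_cost.
obj_float = ConstrainedMCObjective(
    objective=objective, constraints=[constraint], infeasible_cost=0.0
)
obj_tensor = ConstrainedMCObjective(
    objective=objective,
    constraints=[constraint],
    infeasible_cost=torch.tensor([0.0], dtype=torch.double),
)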

facebook-github-bot pushed a commit that referenced this pull request Nov 11, 2022
Summary:
X-link: facebook/Ax#1191

Pull Request resolved: #1439

This diff acts as a follow-up to the recent model-fitting refactor. The previous update focused on the high-level logic used to determine which fitting routines to use for which MLLs. This diff refactors the internal machinery used to evaluate forward-backward passes (producing losses and gradients, respectively) during optimization.

The solution we have opted for is to abstract away the evaluation process by relying on closures. In most cases, these closures are automatically constructed by composing simpler, multiply-dispatched base functions.

Reviewed By: Balandat

Differential Revision: D39101211

fbshipit-source-id: c2058a387fd74058073cfe73c9404d2df2f9b55a
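The closures themselves are not included in this page; the following is only a rough sketch of the forward-backward closure pattern described above, using standard GPyTorch MLL calls (the helper name and structure are assumptions, not the code from D39101211):

def make_forward_backward_closure(mll, parameters):
    # Hypothetical helper: returns a closure whose forward pass produces the
    # (negative MLL) loss and whose backward pass populates gradients.
    def closure():
        for p in parameters:
            if p.grad is not None:
                p.grad.zero_()
        model = mll.model
        output = model(*model.train_inputs)
        loss = -mll(output, model.train_targets)  # forward: compute the loss
        loss.backward()                           # backward: compute gradients
        grads = [p.grad for p in parameters]
        return loss, grads

    return closure

# Usage sketch: pass an ExactMarginalLogLikelihood and its trainable
# parameters, then hand the closure to an optimizer's step routine.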
Labels
CLA Signed Do not delete this pull request or issue due to inactivity. fb-exported

3 participants