
Adding constraints to Multi-fidelity BO with discrete fidelities #1100

Closed
shang-zhu opened this issue Feb 26, 2022 · 3 comments
shang-zhu commented Feb 26, 2022

Issue description

I was following the Multi-fidelity BO with discrete fidelities tutorial (https://botorch.org/tutorials/discrete_multi_fidelity_bo) and tried to add an inequality parameter constraint to the optimizer. I only modified the optimize_acqf_mixed() and optimize_acqf() calls as below, where I tried to implement the constraint x[5] < 0.2. But the code ran forever and failed to output any candidates or observations. It was working well without the two constraints. Did I mess up anything here? Thank you.

Code example

def get_mfkg(model):
    # posterior mean with the fidelity parameter (column 6) fixed at the target fidelity 1.0
    curr_val_acqf = FixedFeatureAcquisitionFunction(
        acq_function=PosteriorMean(model),
        d=7,
        columns=[6],
        values=[1],
    )
    
    _, current_value = optimize_acqf(
        acq_function=curr_val_acqf,
        bounds=bounds[:,:-1],
        q=1,
        num_restarts=10 if not SMOKE_TEST else 2,
        raw_samples=1024 if not SMOKE_TEST else 4,
        options={"batch_limit": 10, "maxiter": 200},
        # encodes -1.0 * x[5] >= -0.2, i.e. x[5] <= 0.2 (coefficients as a float tensor)
        inequality_constraints=[(torch.tensor([5]), torch.tensor([-1.0]), -0.2)],
    )
        
    return qMultiFidelityKnowledgeGradient(
        model=model,
        num_fantasies=128 if not SMOKE_TEST else 2,
        current_value=current_value,
        cost_aware_utility=cost_aware_utility,
        project=project,
    )
def optimize_mfkg_and_get_observation(mfkg_acqf):
    """Optimizes MFKG and returns a new candidate, observation, and cost."""
    
    X_init = gen_one_shot_kg_initial_conditions(
        acq_function = mfkg_acqf,
        bounds=bounds,
        q=4,
        num_restarts=10,
        raw_samples=512,
    )
    candidates, _ = optimize_acqf_mixed(
        acq_function=mfkg_acqf,
        bounds=bounds,
        fixed_features_list=[{6: 0.0}, {6: 1.0}], 
        q=1,
        num_restarts=NUM_RESTARTS,
        raw_samples=RAW_SAMPLES,
        batch_initial_conditions=X_init,
        options={"batch_limit": 5, "maxiter": 200},
        # same encoding as above: -1.0 * x[5] >= -0.2, i.e. x[5] <= 0.2
        inequality_constraints=[(torch.tensor([5]), torch.tensor([-1.0]), -0.2)],
    )
    cost = cost_model(candidates).sum()
    new_x = candidates.detach()
    new_obj = problem(new_x).unsqueeze(-1)
    print(f"candidates:\n{new_x}\n")
    print(f"observations:\n{new_obj}\n\n")
    return new_x, new_obj, cost

Note that the discrete fidelities were slightly modified from the original tutorial.

System Info

  • BoTorch Version 0.6.0
  • GPyTorch Version 1.6.0
  • PyTorch Version 1.10.0
  • macOS Mojave Version 10.14.6
@Balandat
Contributor

Hmm, since this is KG, it is a high-dimensional optimization problem, so it might be that the SLSQP solver is just too slow here. But your constraint is a simple bound constraint, so you can just edit the bounds you pass to the optimization instead of adding it as an inequality constraint.
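
A minimal sketch of that suggestion, assuming the 2 x d bounds tensor from the tutorial (row 0 holds lower bounds, row 1 upper bounds); the values here are illustrative:

import torch

# 7-dimensional unit-cube bounds as in the tutorial
bounds = torch.tensor([[0.0] * 7, [1.0] * 7])

# enforce x[5] <= 0.2 directly via the box bounds,
# rather than passing it as an inequality constraint
bounds[1, 5] = 0.2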

@shang-zhu
Author

I see. I actually have a more complicated constraint, x1 + x2 < 1; the above was more of a test of the constraint capability. Do you think it might not be feasible to optimize KG subject to this constraint? If so, is there a better way to incorporate constraints in multi-fidelity BO?
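
For reference, a sketch of how such a linear constraint can be encoded in optimize_acqf's (indices, coefficients, rhs) convention, which represents sum_i coefficients[i] * x[indices[i]] >= rhs; this assumes x1 and x2 refer to input dimensions 1 and 2:

import torch

# x1 + x2 <= 1 rewritten as -x1 - x2 >= -1 to match the >= convention
linear_constraint = (torch.tensor([1, 2]), torch.tensor([-1.0, -1.0]), -1.0)

# then pass e.g. optimize_acqf(..., inequality_constraints=[linear_constraint])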

@saitcakmak
Contributor

I wonder if this is related to #938. Though, as I found out in #939, that should not be an issue with qMFKG.

I think the real issue is SLSQP, as @Balandat pointed out above. Here's an old comment that talks about how SLSQP tries to allocate 128GB of memory when optimizing qKG. The reason is that SLSQP's time and space complexity are cubic in the effective dimension of the solution tensor, which in the case of qKG / qMFKG is batch_limit * (q + num_fantasies) * d. Luckily, one of these is very easy to control: I'd replace the batch_limit you pass into optimize_acqf with 1 and see if that solves the issue.
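
A sketch of that change applied to the call from the issue, reusing the same variables (mfkg_acqf, bounds, X_init, etc.) defined above; only batch_limit differs:

candidates, _ = optimize_acqf_mixed(
    acq_function=mfkg_acqf,
    bounds=bounds,
    fixed_features_list=[{6: 0.0}, {6: 1.0}],
    q=1,
    num_restarts=NUM_RESTARTS,
    raw_samples=RAW_SAMPLES,
    batch_initial_conditions=X_init,
    # batch_limit=1 keeps SLSQP's effective dimension at (q + num_fantasies) * d
    options={"batch_limit": 1, "maxiter": 200},
    inequality_constraints=[(torch.tensor([5]), torch.tensor([-1.0]), -0.2)],
)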
