
Constraint Handling

Hao Wang edited this page Oct 6, 2020 · 3 revisions

Handling the Constraints

It is common to consider a constrained optimization task, where the search procedure must respect equality and inequality conditions, namely h(x) = 0 and g(x) <= 0. The constraints are handled by imposing a dynamic penalty on the acquisition function when it is optimized to propose new candidate solutions. It is straightforward to implement your own equality or inequality constraints.
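The penalty idea can be illustrated with a minimal sketch. This is not the library's actual implementation (which uses a dynamic penalty); the static weight `rho` and the quadratic penalty form below are illustrative assumptions, chosen only to show how infeasible candidates are made less attractive to the acquisition maximizer:

```python
import numpy as np

def penalized(acq_value, x, h, g, rho=1e3):
    # Illustrative static penalty: infeasible points score worse.
    # h(x) != 0 or g(x) > 0 contributes a quadratic violation term.
    violation = h(x) ** 2 + max(0.0, g(x)) ** 2
    return acq_value - rho * violation

h = lambda x: abs(np.sum(x) - 1)   # equality: sum(x) = 1
g = lambda x: np.sum(x) - 1        # inequality: sum(x) <= 1

feasible = np.array([0.5, 0.5])
infeasible = np.array([2.0, 2.0])
print(penalized(1.0, feasible, h, g))    # 1.0 (no penalty)
print(penalized(1.0, infeasible, h, g))  # heavily penalized
```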

Input argument to the constraint functions: the equality function h and the inequality function g work just like the objective function obj_fun. They can be defined to take either a list or a dictionary as input; this choice should be communicated to the optimizer by setting the eval_type argument of the BO class.

Equality Constraints

For equality constraints, the function should return 0 exactly when the condition is satisfied, and it is generally recommended to return a larger value the more the constraint is violated. For instance, in the example below we restrict the search to the hyperplane x_1 + x_2 + ... + x_n = 1:

import numpy as np

def h(x):
    return np.abs(np.sum(x) - 1)

If you have more than one constraint, the constraint function should return a vector (e.g., a NumPy array) containing the value of each constraint.
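For example, a two-constraint equality function might look like the sketch below. The point is the stacking of one value per constraint into a single array; the specific constraints (sum(x) = 1 and x_1 = x_2) are made up for illustration:

```python
import numpy as np

def h(x):
    # One entry per equality constraint:
    #   x[0] + ... + x[n-1] = 1   and   x[0] = x[1]
    return np.array([np.abs(np.sum(x) - 1),
                     np.abs(x[0] - x[1])])

print(h([0.5, 0.5]))  # both satisfied: [0. 0.]
```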

Inequality Constraints

For inequality constraints, the function should return a non-positive value exactly when the condition is satisfied; similarly, it is recommended to return a larger value the more the constraint is violated. For instance, in the example below we restrict the search to the half of the search space defined by x_1 + x_2 + ... + x_n - 1 <= 0:

import numpy as np

def g(x):
    return np.sum(x) - 1

The constraint functions h and g should be passed to the constructor of the BO class (or its subclasses) or to the fmin interface.
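Putting the pieces together, a complete setup might look like the sketch below. The objective is a made-up sphere function, and the keyword names in the commented constructor call are assumptions, not verified signatures; check the BO constructor (and fmin) for the exact argument names:

```python
import numpy as np

def obj_fun(x):
    # illustrative objective: sphere function
    return np.sum(np.asarray(x) ** 2)

def h(x):   # equality: sum(x) = 1
    return np.abs(np.sum(x) - 1)

def g(x):   # inequality: sum(x) <= 1
    return np.sum(x) - 1

# Hypothetical wiring -- the keyword names below are assumptions;
# consult the BO class signature for the actual parameter names:
# opt = BO(..., h=h, g=g, eval_type='list')
# xopt, fopt = opt.run()
```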