Sampling under nonlinear equality constraints #314

Closed
Osburg opened this issue Nov 29, 2023 · 4 comments · Fixed by #328
Osburg commented Nov 29, 2023

If I have not overlooked something, we currently do not have a sampler for nonlinear equality constraints. As @simonsung06 pointed out in #307, the OptimalityCriterionEnum.SPACE_FILLING objective only depends on the domain, so I had the idea that we could use this objective to add a sampler supporting a wide range of constraints, including nonlinear equality constraints (and batch constraints as soon as they are supported in doe). Warning: what follows is only naive ideas and hand-waving arguments.
The general idea is to call find_local_max_ipopt with the space-filling objective on the domain in the _ask method. This creates a number of points that are "uniformly" distributed over the feasible set, while all constraints that find_local_max_ipopt can handle are supported. These points are treated as candidates for the sample, and from them we draw a certain percentage to be returned by _ask. This last step is intended to avoid sampling preferentially in the boundary region of the feasible set: since the space-filling objective will always push at least some points onto the boundary, returning all candidates would destroy the "uniformity" of the distribution. Instead, we create a carpet of candidates populating the feasible set and then randomly pick some of them, as in the sketch below.
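In code, the proposed scheme boils down to two steps (a rough sketch; the helper name and the n_candidates parameter are made up for illustration, this is not existing BoFire API):

def space_filling_sample(domain, n_samples, n_candidates):
    # 1. spread n_candidates points "uniformly" over the feasible set
    candidates = find_local_max_ipopt(
        domain,
        "linear",
        objective=OptimalityCriterionEnum.SPACE_FILLING,
        n_experiments=n_candidates,
    )
    # 2. return a random subset so boundary points are not overrepresented
    return candidates.sample(n=n_samples)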
For domains with high input dimension this will certainly not create samples in reasonable time, but in lower-dimensional settings it could be an option. In the small example below we sample 50 points from a spherical surface; I hope this illustrates the general idea a little. Yellow points are the "carpet of candidates", the black points are a random sample drawn from the candidate set.

@jduerholt (or anyone else interested) What do you think about the idea, does it make any sense to you? Can we even make use of such a sampling method? If you think this could be a good idea, feel free to assign this issue to me and I will take care of it in the next few days.

# bofire import paths may vary between versions:
from bofire.data_models.constraints.api import NonlinearEqualityConstraint
from bofire.data_models.domain.api import Domain
from bofire.data_models.features.api import ContinuousInput, ContinuousOutput
from bofire.strategies.doe.design import find_local_max_ipopt
from bofire.strategies.enum import OptimalityCriterionEnum

domain = Domain(
    inputs=[
        ContinuousInput(key="x1", bounds=(-1, 1)),
        ContinuousInput(key="x2", bounds=(-1, 1)),
        ContinuousInput(key="x3", bounds=(-1, 1)),
    ],
    outputs=[ContinuousOutput(key="y")],
    constraints=[NonlinearEqualityConstraint(expression="x1**2 + x2**2 + x3**2 - 1", features=["x1", "x2", "x3"])],
)

# create a "carpet" of 100 space-filling candidates on the sphere surface
result = find_local_max_ipopt(domain, "linear", objective=OptimalityCriterionEnum.SPACE_FILLING, n_experiments=100, ipopt_options={"maxiter": 100, "disp": 3, "linear_solver": "ma57", "hsllib": "libcoinhsl.dylib", "timing_statistics": "yes"})
result.round(3)

import matplotlib.pyplot as plt
import numpy as np
from matplotlib.ticker import FormatStrFormatter


def plot_results_3d(result, surface_func):
    # parametrize the sphere and draw the upper and lower hemisphere
    u, v = np.mgrid[0 : 2 * np.pi : 100j, 0 : np.pi : 80j]
    X = np.cos(u) * np.sin(v)
    Y = np.sin(u) * np.sin(v)
    Z = surface_func(X, Y)

    fig = plt.figure(figsize=(8, 8))
    ax = fig.add_subplot(111, projection="3d")
    ax.plot_surface(X, Y, Z, alpha=0.1, color="blue")
    ax.plot_surface(X, Y, -Z, alpha=0.1, color="blue")

    # the candidate "carpet" returned by find_local_max_ipopt
    ax.scatter(
        xs=result["x1"],
        ys=result["x2"],
        zs=result["x3"],
        marker="o",
        s=40,
        color="orange",
    )
    ax.set(xlabel="x1", ylabel="x2", zlabel="x3")
    ax.xaxis.set_major_formatter(FormatStrFormatter("%.2f"))
    ax.yaxis.set_major_formatter(FormatStrFormatter("%.2f"))

    return ax


ax = plot_results_3d(result, surface_func=lambda x1, x2: np.sqrt(1 - (x1**2 + x2**2)))

# randomly keep each candidate with probability 0.5 (the black points)
sample = result[np.random.rand(len(result)) < 0.5]
ax.scatter(
    xs=sample["x1"],
    ys=sample["x2"],
    zs=sample["x3"],
    marker="o",
    s=40,
    color="black",
)

Ipopt output (excerpt):
Total number of variables............................:      300
                     variables with only lower bounds:        0
                variables with lower and upper bounds:      300
                     variables with only upper bounds:        0
Total number of equality constraints.................:      100
Total number of inequality constraints...............:        0
        inequality constraints with only lower bounds:        0
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0


Number of Iterations....: 100

                                   (scaled)                 (unscaled)
Objective...............:  -3.5717406289388549e+01   -3.5717406289388549e+01
Dual infeasibility......:   1.8847170877068415e+00    1.8847170877068415e+00
Constraint violation....:   1.0327337682802806e-06    1.0327337682802806e-06
Variable bound violation:   0.0000000000000000e+00    0.0000000000000000e+00
Complementarity.........:   1.0018139600569745e-11    1.0018139600569745e-11
Overall NLP error.......:   1.8847170877068415e+00    1.8847170877068415e+00


Number of objective function evaluations             = 242
Number of objective gradient evaluations             = 101
Number of equality constraint evaluations            = 242
Number of inequality constraint evaluations          = 0
Number of equality constraint Jacobian evaluations   = 101
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations             = 0
Total seconds in IPOPT (w/o function evaluations)    =      0.206
Total seconds in NLP function evaluations            =      0.432

[Figure: the carpet of space-filling candidates on the spherical surface, with the randomly drawn subsample shown in black]

@jduerholt

Hi @Osburg,

I am not 100% sure that I understood it, so I will try to restate it in my own words:

So you want to come up with a new strategy in BoFire, which could for example be called UniversalSampler. This new sampler would then use find_local_max_ipopt with the space-filling criterion to come up with new samples. Correct?

This would then also resolve the point from @simonsung06 that the space-filling criterion is somehow incompatible with the DoEStrategy. Correct?

The data model of the new strategy would then look something like this:

from typing import Dict, Literal


class UniversalSampler(Sampler):
    type: Literal["UniversalSampler"] = "UniversalSampler"
    base_samples: int  # size of the space-filling candidate "carpet"
    ipopt_options: Dict[str, str]
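And internally, the _ask would then roughly do the following, if I got it right (just a sketch to check my understanding; I am making up the method body, not quoting actual BoFire code):

def _ask(self, candidate_count):
    # build the "carpet" of base_samples space-filling points ...
    carpet = find_local_max_ipopt(
        self.domain,
        "linear",
        objective=OptimalityCriterionEnum.SPACE_FILLING,
        n_experiments=self.base_samples,
        ipopt_options=self.ipopt_options,
    )
    # ... and hand back a random subset of the requested size
    return carpet.sample(n=candidate_count)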

If I understand this correctly, then I like the idea!

Best,

Johannes

Osburg commented Nov 30, 2023

Hi Johannes :),

yes, this is exactly what I was talking about. Regarding @simonsung06's point in #307: I think that one would remain, since his argument for why DoEStrategy should have an optional argument for the optimality criterion is still valid, right?

Cheers
Aaron

@jduerholt

Hi Aaron,

I agree that the DoEStrategy would still have an optional argument for the optimality criterion, but we would not allow the space-filling criterion there. Makes sense?
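(Concretely, a check along these lines in the DoEStrategy data model; just a sketch in pydantic v2 style, the field name and default are illustrative, not the actual implementation:)

from pydantic import field_validator


class DoEStrategy(Strategy):
    objective: OptimalityCriterionEnum = OptimalityCriterionEnum.D_OPTIMALITY

    @field_validator("objective")
    @classmethod
    def validate_objective(cls, value):
        # the space-filling criterion is reserved for the new sampler
        if value == OptimalityCriterionEnum.SPACE_FILLING:
            raise ValueError("Use the new sampler for space-filling designs.")
        return value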

Best,

Johannes

Osburg commented Nov 30, 2023

Ah, yes it does! Then let's do it like this. You can assign this issue to me if you like.
