
Callbacks for linear programming solvers #911

Closed · mckib2 opened this issue Jul 11, 2022 · 6 comments

mckib2 (Contributor) commented Jul 11, 2022

SciPy is now using the HiGHS solvers by default and planning to deprecate its legacy simplex and interior point solvers. A feature of the old solvers that would be missed is the ability to run a user-provided callback each iteration to inspect the current solution. Is it currently possible or might it be possible to provide such a callback function to the HiGHS solvers?
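
For context, this is roughly how the legacy interface is used (a minimal sketch with made-up problem data; the pre-HiGHS methods such as 'interior-point' call the user function once per iteration with a scipy.optimize.OptimizeResult):

    import numpy as np
    from scipy.optimize import linprog

    def report(res):
        # res is an OptimizeResult snapshot of the current iterate
        print(f"iter {res.nit}: fun = {res.fun:.6g}, status = {res.status}")

    # Hypothetical problem data, for illustration only
    c = np.array([-1.0, 4.0])
    A_ub = np.array([[-3.0, 1.0], [1.0, 2.0]])
    b_ub = np.array([6.0, 4.0])

    result = linprog(c, A_ub=A_ub, b_ub=b_ub,
                     method="interior-point", callback=report)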

odow (Collaborator) commented Jul 12, 2022

x-ref #781 #896

Are there any other solvers that support callbacks on a per-iteration basis for simplex and interior point? That seems like it'd cause a massive slow-down.

jajhall (Member) commented Jul 12, 2022

We will be introducing a system of callbacks into HiGHS, but not on a per-iteration basis for simplex at least.

jajhall closed this as completed Jul 12, 2022

mckib2 (Contributor, Author) commented Jul 12, 2022

Per-iteration is not necessarily needed, but some regularly called mechanism would be nice.

Are there any other solvers that support callbacks on a per-iteration basis for simplex and interior point?

SciPy's legacy solvers are the only ones that come to mind. The only uses we can come up with for them are alternative visualization or user-defined progress printing. It wouldn't be the end of the world if we deprecated the callback mechanism completely, but we try our best not to break downstream projects when possible.

jajhall (Member) commented Jul 12, 2022

I can see that you don't want to do this, but a "per iteration" callback would be a performance hit that we can't accept. At present there is a logging call - that is quiet by default - every time the basis matrix is refactorized. We could consider putting a callback there. A callback in every IPM iteration would also be reasonable.

HaoZeke (Contributor) commented Oct 29, 2023

SciPy callbacks

Some comments on the additions required for SciPy. In some sense the callbacks in HiGHS are more functional than those of SciPy: SciPy only needs the OptimizeResult made available for consumption by the callback, i.e. it is essentially a logging callback.

In general, an OptimizeResult has:

    x : ndarray
        The solution of the optimization.
    success : bool
        Whether or not the optimizer exited successfully.
    status : int
        Termination status of the optimizer. Its value depends on the
        underlying solver. Refer to `message` for details.
    message : str
        Description of the cause of the termination.
    fun, jac, hess: ndarray
        Values of objective function, its Jacobian and its Hessian (if
        available). The Hessians may be approximations, see the documentation
        of the function in question.
    hess_inv : object
        Inverse of the objective function's Hessian; may be an approximation.
        Not available for all solvers. The type of this attribute may be
        either np.ndarray or scipy.sparse.linalg.LinearOperator.
    nfev, njev, nhev : int
        Number of evaluations of the objective functions and of its
        Jacobian and Hessian.
    nit : int
        Number of iterations performed by the optimizer.
    maxcv : float
        The maximum constraint violation.

As will be seen, however, not all of these are set (in either the documentation or the implementation).
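
As a rough illustration of the "logging callback" use case (purely a sketch; field access follows the documentation above), one might accumulate per-iteration data for later inspection or plotting:

    # Sketch of a logging callback: accumulate per-iteration data for later
    # inspection or plotting. Only fields the linprog solvers actually set are used.
    history = []

    def logging_callback(res):
        history.append({
            "nit": res.nit,        # iteration count
            "fun": res.fun,        # current objective value
            "x": res.x.copy(),     # current solution vector
            "phase": res.phase,    # solver phase
        })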

simplex callbacks

  • Are called at each solver step
    callback : callable, optional
        If a callback function is provided, it will be called within each
        iteration of the algorithm. The callback must accept a
        `scipy.optimize.OptimizeResult` consisting of the following fields:

            x : 1-D array
                Current solution vector
            fun : float
                Current value of the objective function
            success : bool
                True only when a phase has completed successfully. This
                will be False for most iterations.
            slack : 1-D array
                The values of the slack variables. Each slack variable
                corresponds to an inequality constraint. If the slack is zero,
                the corresponding constraint is active.
            con : 1-D array
                The (nominally zero) residuals of the equality constraints,
                that is, ``b - A_eq @ x``
            phase : int
                The phase of the optimization being executed. In phase 1 a basic
                feasible solution is sought and the tableau ``T`` has an additional
                row representing an alternate objective function.
            status : int
                An integer representing the exit status of the optimization::

                     0 : Optimization terminated successfully
                     1 : Iteration limit reached
                     2 : Problem appears to be infeasible
                     3 : Problem appears to be unbounded
                     4 : Serious numerical difficulties encountered

            nit : int
                The number of iterations performed.
            message : str
                A string descriptor of the exit status of the optimization.

In the implementation, we see:

            solution[:] = 0
            solution[basis[:n]] = T[:n, -1]
            x = solution[:m]
            x, fun, slack, con = _postsolve(
                x, postsolve_args
            )
            res = OptimizeResult({
                'x': x,
                'fun': fun,
                'slack': slack,
                'con': con,
                'status': status,
                'message': message,
                'nit': nit,
                'success': status == 0 and complete,
                'phase': phase,
                'complete': complete,
                })
            callback(res)

revised_simplex

As seen in _linprog_rs.py

    callback : callable, optional
        If a callback function is provided, it will be called within each
        iteration of the algorithm. The callback function must accept a single
        `scipy.optimize.OptimizeResult` consisting of the following fields:

            x : 1-D array
                Current solution vector.
            fun : float
                Current value of the objective function ``c @ x``.
            success : bool
                True only when an algorithm has completed successfully,
                so this is always False as the callback function is called
                only while the algorithm is still iterating.
            slack : 1-D array
                The values of the slack variables. Each slack variable
                corresponds to an inequality constraint. If the slack is zero,
                the corresponding constraint is active.
            con : 1-D array
                The (nominally zero) residuals of the equality constraints,
                that is, ``b - A_eq @ x``.
            phase : int
                The phase of the algorithm being executed.
            status : int
                For revised simplex, this is always 0 because if a different
                status is detected, the algorithm terminates.
            nit : int
                The number of iterations performed.
            message : str
                A string descriptor of the exit status of the optimization.

The implementation:

        res = OptimizeResult(
            {
                "x": x_o,
                "fun": fun,
                "slack": slack,
                "con": con,
                "nit": iteration,
                "phase": phase,
                "complete": False,
                "status": status,
                "message": "",
                "success": False,
            }
        )
        callback(res)

interior-point

    callback : callable, optional
        If a callback function is provided, it will be called within each
        iteration of the algorithm. The callback function must accept a single
        `scipy.optimize.OptimizeResult` consisting of the following fields:

            x : 1-D array
                Current solution vector
            fun : float
                Current value of the objective function
            success : bool
                True only when an algorithm has completed successfully,
                so this is always False as the callback function is called
                only while the algorithm is still iterating.
            slack : 1-D array
                The values of the slack variables. Each slack variable
                corresponds to an inequality constraint. If the slack is zero,
                the corresponding constraint is active.
            con : 1-D array
                The (nominally zero) residuals of the equality constraints,
                that is, ``b - A_eq @ x``
            phase : int
                The phase of the algorithm being executed. This is always
                1 for the interior-point method because it has only one phase.
            status : int
                For the interior-point method, this is always 0 because if a
                different status is detected, the algorithm terminates.
            nit : int
                The number of iterations performed.
            message : str
                A string descriptor of the exit status of the optimization.

Implementation:

        x_o, fun, slack, con = _postsolve(x / tau, postsolve_args)
        res = OptimizeResult(
            {
                "x": x_o,
                "fun": fun,
                "slack": slack,
                "con": con,
                "nit": iteration,
                "phase": 1,
                "complete": False,
                "status": 0,
                "message": "",
                "success": False,
            }
        )

Mapping to highspy

highspy uses the HighsCallbackDataOut which has:

typedef struct {
  int log_type;  // cast of HighsLogType
  double running_time;
  HighsInt simplex_iteration_count;
  HighsInt ipm_iteration_count;
  double objective_function_value;
  int64_t mip_node_count;
  double mip_primal_bound;
  double mip_dual_bound;
  double mip_gap;
  double* mip_solution;
} HighsCallbackDataOut;

Which means that (entries not noted below as already present are missing from HighsCallbackDataOut):

  • x : Current solution vector (only mip_solution is available, so this is missing for LP)
  • fun : Already present
  • success : Can probably be set to false without much trouble (e.g. _rs and _ip)
  • slack : The (nominally positive) values of the slack, b_ub - A_ub @ x.
  • con : The (nominally zero) residuals of the equality constraints, b_eq - A_eq @ x.
  • phase : ip sets this to 1 always
  • nit : Already present
  • message : Already supported (via the callback interface, not in HighsCallbackDataOut)

@jajhall would it be acceptable to have equivalents in HighsCallbackDataOut? Adding things which are only ever going to be used for logging seems like a pointless slowdown though.

It might make sense to have a separate LoggingData structure which essentially has what SciPy needs. This would build on the existing callback mechanism, not replace or compete with it, and would allow users who opt out not to be bothered with it at all.
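
A rough sketch of what such a mapping could look like on the Python side (purely illustrative; highs_data_to_scipy_result is a hypothetical helper, not an existing highspy API, and attribute names follow HighsCallbackDataOut above):

    # Hypothetical adapter: build a scipy-style OptimizeResult-like dict from the
    # fields HighsCallbackDataOut already carries; entries HiGHS does not expose
    # (x for LP, slack, con) are stubbed with placeholders.
    def highs_data_to_scipy_result(data_out):
        return {
            "fun": data_out.objective_function_value,
            "nit": max(data_out.simplex_iteration_count,
                       data_out.ipm_iteration_count),
            "x": None,       # not available for LP; mip_solution covers MIP only
            "slack": None,   # would need b_ub - A_ub @ x
            "con": None,     # would need b_eq - A_eq @ x
            "phase": 1,      # scipy's interior-point always reports phase 1
            "status": 0,
            "message": "",
            "success": False,
            "complete": False,
        }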

Frequency

As discussed in #911 (comment), a per-iteration callback would not be feasible; however, I was wondering if it might be OK to add (in a separate PR) a more SciPy-compatible callback structure, to be used only behind a special compile guard, like #ifdef SCIPY_COMPAT.

Alternatives

I am also going to open a companion discussion at SciPy to discuss whether it is feasible to:

  • Deprecate the current callback interface altogether (as suggested by @mckib2)
  • Support the full highspy callback interface (with interrupts and other nice features), but with reduced frequency / reduced data logging

jajhall (Member) commented Mar 26, 2024

LP and MIP solver callbacks are now implemented

jajhall closed this as completed Mar 26, 2024