
Solve Mathematical Program in Parallel #19119

Closed
AlexandreAmice opened this issue Apr 2, 2023 · 11 comments · Fixed by #21957
Labels: component: mathematical program · type: feature request
@AlexandreAmice (Contributor) commented Apr 2, 2023

Is your feature request related to a problem? Please describe.
There are many use cases for solving multiple mathematical programs in parallel. Examples include starting trajectory optimization from multiple initial guesses, certifying that no collision will occur in a region of C-space, and parallelizing the solution of quasi-convex optimization programs.

Currently this is possible in C++, although it requires writing the parallelization code yourself. Having a method which handles this procedure would be a nice addition for C++ users. For Python users, this is currently not possible due to the GIL and the inability to pickle MathematicalProgram.

Describe the solution you'd like
Providing a parallelized Solve function that is bound in Python would be an elegant solution for both sets of users. This would require resolving #10320.

Describe alternatives you've considered
Enabling the pickling of MathematicalProgram, so that Python users can use multiprocessing or similar libraries, would let Python users work around the GIL. However, this could be quite slow, especially for large mathematical programs.

@jwnimmer-tri added the label component: mathematical program on Apr 4, 2023
@nepfaff (Member) commented Feb 1, 2024

I support the pickling of MathematicalProgram idea for Python parallelism. In particular, I'm interested in evaluating AugmentedLagrangianNonsmooth in parallel in Python (for use with Nevergrad black-box optimization). Pickling AugmentedLagrangianNonsmooth requires pickling MathematicalProgram.

Edit after discussion with Alex: Not reasonable for plant-dependent MathematicalPrograms...

@AlexandreAmice (Contributor, Author) commented:

To those who come across this issue in the future, note that the current status is that many of the constraints in MathematicalProgram are not thread safe. Examples include all of the constraints which interact with a MultibodyPlant and its Context.

This makes pickling a MathematicalProgram essentially impossible as it would require serializing the entire systems framework.

It may be possible to safely parallelize a convex MathematicalProgram.

@RussTedrake (Contributor) commented:

@jwnimmer-tri -- could I ask you to weigh in here about whether it is currently safe to parallelize convex optimization solves? In #21770, it seems like you were ok with it (using the one-prog-per-core model)?

@jwnimmer-tri (Collaborator) commented:

Let me check that we're all talking about the same thing.

The proposal here is for a new C++ function (also bound in pydrake) something like this?

/** Solves prog[i] into result[i], optionally using initial_guess[i] and solver_options[i] if given.
Uses at most parallelism cores, with static scheduling by default. */
std::vector<MathematicalProgramResult> SolveParallel(
    const std::vector<const MathematicalProgram*>& prog,
    const std::vector<const Eigen::VectorXd*>* initial_guess = nullptr,
    const std::vector<const SolverOptions*>* solver_options = nullptr,
    const Parallelism parallelism = Parallelism::Max(),
    bool dynamic_schedule = false);

Architecturally, I think this is fine and good.

As to the thread-safety and correctness, that's a question to be judged by the implementation, not the interface.

@AlexandreAmice (Contributor, Author) commented:

Steps to implement this are:

  1. Land Solver common option for max number of threads #21857
  2. Add the member thread_safe to Constraint
  3. Add the member thread_safe to MathematicalProgram
  4. Implement SolveParallel

@AlexandreAmice (Contributor, Author) commented:

Per @jwnimmer-tri's suggested function:

/** Solves prog[i] into result[i], optionally using initial_guess[i] and solver_options[i] if given.
Uses at most parallelism cores, with static scheduling by default. */
std::vector<MathematicalProgramResult> SolveParallel(
    const std::vector<const MathematicalProgram*>& prog,
    const std::vector<const Eigen::VectorXd*>* initial_guess = nullptr,
    const std::vector<const SolverOptions*>* solver_options = nullptr,
    const Parallelism parallelism = Parallelism::Max(),
    bool dynamic_schedule = false);

I believe what makes the most sense is to have this function in solve.h/cc with this signature:


/**
 * Solves progs[i] into result[i], optionally using initial_guess[i] and
 * solver_options[i] if given, by invoking solvers[i] if provided.  If
 * solvers[i] is nullptr then the best available solver is constructed for
 * progs[i] individually depending on the availability of the solver and the
 * problem formulation. If solvers == nullptr then this is done for every
 * progs[i]. Uses at most parallelism cores, with dynamic scheduling by default.
 *
 * @note only programs which are thread safe are solved concurrently. Programs
 * which are not thread safe will be solved sequentially in a thread safe
 * manner.
 *
 * @throws if initial guess and solver options are provided and not the same
 * size as progs.
 *
 * @throws if any of the progs are nullptr.
 *
 * @throws if solvers[i] cannot solve progs[i].
 */
std::vector<MathematicalProgramResult> SolveInParallel(
    const std::vector<const MathematicalProgram*>& progs,
    const std::vector<const Eigen::VectorXd*>* initial_guesses = nullptr,
    const std::vector<const SolverOptions*>* solver_options = nullptr,
    const std::vector<const SolverInterface*>* solvers = nullptr,
    const Parallelism parallelism = Parallelism::Max(),
    bool dynamic_schedule = true);

Additionally, a very common way to manually specify the desired solver (in Python) is some variant of the code:

solver = MosekSolver()
solver.Solve(prog)

To avoid having to specify the solver n times if all n programs are of the same type (which is often the case), I propose adding a function SolveInParallel to either SolverInterface or SolverBase. This function would have the signature:

/**
 * Solves progs[i] into result[i], optionally using initial_guess[i] and
 * solver_options[i] if given, by invoking this solver's Solve method.
 * Uses at most parallelism cores, with dynamic scheduling by default.
 *
 * @note only programs which are thread safe are solved concurrently. Programs
 * which are not thread safe will be solved sequentially in a thread safe
 * manner.
 *
 * @throws if initial guess and solver options are provided and not the same
 * size as progs.
 *
 * @throws if any of the progs are nullptr.
 *
 * @throws if this solver cannot solve progs[i].
 */
std::vector<MathematicalProgramResult> SolverInterface::SolveInParallel(
    const std::vector<const MathematicalProgram*>& progs,
    const std::vector<const Eigen::VectorXd*>* initial_guesses,
    const std::vector<const SolverOptions*>* solver_options,
    const int parallelism, bool dynamic_schedule)

@jwnimmer-tri (Collaborator) commented:

To avoid having to specify the solver n time if all n programs are the same type ...

How about passing a SolverId in that case?

A revised proposal, for a single function (in the namespace, not a method).

/** Solves prog[i] into result[i], optionally using initial_guess[i] and solver_options[i] if given.
If `solver_id` is given, then all programs will be solved using instances of that solver, instead
of choosing the best solver based on each program one by one.
Uses at most parallelism cores, with static scheduling by default. */
std::vector<MathematicalProgramResult> SolveParallel(
    const std::vector<const MathematicalProgram*>& prog,
    const std::vector<const Eigen::VectorXd*>* initial_guess = nullptr,
    const std::vector<const SolverOptions*>* solver_options = nullptr,
    const std::optional<SolverId>& solver_id = std::nullopt,
    const Parallelism parallelism = Parallelism::Max(),
    bool dynamic_schedule = false);

@AlexandreAmice (Contributor, Author) commented:

Is specifying a SolverId strictly better than specifying a SolverInterface?

The only thing I dislike in this is the difference in spelling from the vanilla Solve method. Traditionally, Solve(prog) is used when we just want to pick the best solver, while solver.Solve(prog) is used when you wish to specify a solver. One could reconcile this by creating a new method Solve(prog, nullopt, nullopt, solver) so that there is more similarity in spelling.

@AlexandreAmice (Contributor, Author) commented Sep 25, 2024

I suppose the issue with:

/** Solves prog[i] into result[i], optionally using initial_guess[i] and solver_options[i] if given.
If `solver_id` is given, then all programs will be solved using instances of that solver, instead
of choosing the best solver based on each program one by one.
Uses at most parallelism cores, with static scheduling by default. */
std::vector<MathematicalProgramResult> SolveParallel(
    const std::vector<const MathematicalProgram*>& prog,
    const std::vector<const Eigen::VectorXd*>* initial_guess = nullptr,
    const std::vector<const SolverOptions*>* solver_options = nullptr,
    const SolverInterface* solver = nullptr,
    const Parallelism parallelism = Parallelism::Max(),
    bool dynamic_schedule = false);
std::vector<MathematicalProgramResult> SolveParallel(
    const std::vector<const MathematicalProgram*>& prog,
    const std::vector<const Eigen::VectorXd*>* initial_guess = nullptr,
    const std::vector<const SolverOptions*>* solver_options = nullptr,
    const std::vector<const SolverInterface*>* solvers = nullptr,
    const Parallelism parallelism = Parallelism::Max(),
    bool dynamic_schedule = false);

is the ambiguity of the call SolveInParallel(prog). The ambiguity doesn't even get resolved if we call SolveInParallel(prog, nullptr, nullptr, nullptr).

On the other hand, if I use std::optional<SolverId>& to pass the solver, then users cannot use a downstream SolverInterface since MakeSolver won't know about the solver.

@jwnimmer-tri (Collaborator) commented:

Is specifying a SolverId strictly better than specifying a SolverInterface?

Passing const SolverInterface* solver = nullptr would be ideal, yes -- by passing the interface instead of the id, users who have non-Drake implementations of solvers (i.e., in a different repo and build) could still use this sugar function.

The challenge is that we somehow need to obtain one solver per thread, because SolverInterface is not (promised to be) threadsafe. The solvers don't (yet?) have a Clone() function, so we have no way to go from 1 solver to N solvers.

Fixing those problems is all possible (and we could choose to do it), but since this is just a sugar function, restricting it to only be usable from Drake-implemented solvers seemed like a plausible trade-off to me.

const std::vector<const SolverInterface*>* solvers = nullptr,

We probably don't want a full vector of solvers. If the user has 1000 programs and 4 cores, creating 1000 solvers is somewhat wasteful. It does provide the option to use a different solver for each program, but that does not seem like a common use case to me?

This function is sugar, so I think we should only aim to capture the common use cases, and that doesn't seem like one of them.

@AlexandreAmice (Contributor, Author) commented:

I have a PR up at #21957. I changed the function signature a bit more to avoid constructing too many solver objects.
