Solve Mathematical Program in Parallel #19119
Comments
I support the pickling of MathematicalProgram. Edit after discussion with Alex: this is not reasonable for plant-dependent constraints.
To those who come across this issue in the future, note that the current status is that many of the constraints in a MathematicalProgram cannot be pickled. This makes pickling a MathematicalProgram infeasible in general. It may be possible to safely parallelize a convex MathematicalProgram.
@jwnimmer-tri -- could I ask you to weigh in here about whether it is currently safe to parallelize convex optimization solves? In #21770, it seems like you were OK with it (using the one-prog-per-core model)?
Let me check that we're all talking about the same thing. The proposal here is for a new C++ function (also bound in pydrake), something like this?

```cpp
/** Solves prog[i] into result[i], optionally using initial_guess[i] and
 solver_options[i] if given.
 Uses at most `parallelism` cores, with static scheduling by default. */
std::vector<MathematicalProgramResult> SolveParallel(
    const std::vector<const MathematicalProgram*>& prog,
    const std::vector<const Eigen::VectorXd*>* initial_guess = nullptr,
    const std::vector<const SolverOptions*>* solver_options = nullptr,
    const Parallelism parallelism = Parallelism::Max(),
    bool dynamic_schedule = false);
```

Architecturally, I think this is fine and good. As to the thread-safety and correctness, that's a question to be judged of the implementation, not the interface.
Steps to implement this are:
Per @jwnimmer-tri's suggestion for a free function:

I believe what makes the most sense is to have this function in the solvers namespace.

Additionally, a very common way to manually specify the desired solver (in Python) is some variant of constructing a specific solver and calling its Solve method. To avoid having to specify the solver that way, it would help if the parallel solve function could accept the desired solver directly.
How about passing a SolverId? A revised proposal, for a single function (in the namespace, not a method):

```cpp
/** Solves prog[i] into result[i], optionally using initial_guess[i] and
 solver_options[i] if given.
 If `solver_id` is given, then all programs will be solved using instances of
 that solver, instead of choosing the best solver based on each program one by
 one.
 Uses at most `parallelism` cores, with static scheduling by default. */
std::vector<MathematicalProgramResult> SolveParallel(
    const std::vector<const MathematicalProgram*>& prog,
    const std::vector<const Eigen::VectorXd*>* initial_guess = nullptr,
    const std::vector<const SolverOptions*>* solver_options = nullptr,
    const std::optional<SolverId>& solver_id = std::nullopt,
    const Parallelism parallelism = Parallelism::Max(),
    bool dynamic_schedule = false);
```
Is specifying a SolverId sufficient? The only thing I dislike in this is the difference in spelling from the vanilla Solve.
I suppose the issue with passing a solver instance directly is the ambiguity of the call. On the other hand, if I use a SolverId, that ambiguity goes away.
Passing a solver instance is harder. The challenge is that we somehow need to obtain one solver per thread, because a single solver instance cannot safely be shared across threads. Fixing those problems is all possible (and we could choose to do it), but since this is just a sugar function, restricting it to only be usable with Drake-implemented solvers seemed like a plausible trade-off to me.
We probably don't want a full vector of solvers. If the user has 1000 programs and 4 cores, creating 1000 solvers is somewhat wasteful. It does provide the option to use a different solver for each program, but that does not seem like a common use case to me. This function is sugar, so I think we should only aim to capture the common use cases, and that doesn't seem like one of them.
I have a PR up at #21957. I changed the function signature a bit more to avoid constructing too many solver objects. |
Is your feature request related to a problem? Please describe.
There are many use cases for solving multiple mathematical programs in parallel. Examples include starting trajectory optimization from multiple initial guesses, certifying that no collision will occur in a region of C-space, and parallelizing the solution of quasi-convex optimization programs.
Currently this is possible in C++, although it requires writing the parallelized code yourself. Having a method which handles this procedure would be a nice addition for C++ users. For Python users, it is currently not possible at all, due to the GIL and the inability to pickle MathematicalProgram.
Describe the solution you'd like
Providing a parallelized Solve function that is bound in Python would be an elegant solution for both sets of users. This would require resolving #10320.

Describe alternatives you've considered
Enabling the pickling of MathematicalProgram, so that Python users can use multiprocessing or similar libraries, would allow them to get around the GIL. However, this could be quite slow, especially for large mathematical programs.