
Restructure dependencies in MadNLP #32

Closed
frapac opened this issue May 6, 2021 · 2 comments · Fixed by #40
Labels: enhancement (New feature or request)

Comments

frapac (Collaborator) commented May 6, 2021

I think MadNLP currently comes with too many dependencies, which has a significant impact on the precompilation time of the package. Unfortunately, Julia does not properly support conditional dependencies (see the long-awaited issue JuliaLang/Pkg.jl#1285). But since 2020, we can define several packages with different UUIDs under the same git repo (see this discussion, or this PR for a practical example).

I would suggest refactoring MadNLP in the following way, keeping all the code in this repo.

  • MadNLP/src would implement only the core interior-point algorithm, the interfaces to the usual linear solvers (such as LapackCPU or IterativeSolvers), and the lightweight modeling interfaces (NLPModels and MathOptInterface).
  • we would move all the "complicated" dependencies into a new directory, MadNLP/lib. There, each subfolder would be registered as a Julia package with its own UUID in Julia's registry.

For instance, we could structure MadNLP/lib as

lib/MadNLPPlasmo     # interface to Plasmo
lib/MadNLPCUDA       # interface to CUDA's Lapack
lib/MadNLPHSL        # interface to HSL
lib/MadNLPMumps      # interface to MUMPS
lib/MadNLPPardiso    # interface to Pardiso
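
Each of these subfolders would carry its own Project.toml with its own name and UUID. Just to sketch the idea (the UUIDs and version below are placeholders, not real ones), lib/MadNLPHSL could declare something like:

name = "MadNLPHSL"
uuid = "00000000-0000-0000-0000-000000000001"    # placeholder UUID
version = "0.1.0"

[deps]
MadNLP = "00000000-0000-0000-0000-000000000002"  # placeholder for the core MadNLP UUID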

Leading to 5 subpackages ... For the three linear solvers HSL, Mumps, and Pardiso, I am wondering whether we could build something in common with JuliaSmoothOptimizers, to adopt a common interface for the linear solvers we are using. I had a look at HSL.jl and MUMPS.jl, but these two packages implement only a bare-bone wrapper to the C code. I think that, in the medium term, it would be nice to share the same interface:

abstract type AbstractLinearSolver end
abstract type AbstractLinearSystemScaler end
abstract type AbstractIterator end

# dummy fallback implementation
struct EmptyLinearSolver <: AbstractLinearSolver end

introduce(::EmptyLinearSolver) = ""
factorize!(M::EmptyLinearSolver) = M
solve!(::EmptyLinearSolver, x) = x
is_inertia(::EmptyLinearSolver) = false
inertia(::EmptyLinearSolver) = (0, 0, 0)
improve!(::EmptyLinearSolver) = false
rescale!(::AbstractLinearSystemScaler) = nothing
solve_refine!(y, ::AbstractIterator, x) = nothing
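
To illustrate how a concrete backend would plug into such an interface, here is a purely hypothetical sketch (not an actual MadNLP solver) backed by a dense LU factorization from the standard library:

using LinearAlgebra

# hypothetical dense backend implementing the interface above
mutable struct DenseLUSolver <: AbstractLinearSolver
    A::Matrix{Float64}
    fact::Any                                  # holds the LU factorization after factorize!
end
DenseLUSolver(A::Matrix{Float64}) = DenseLUSolver(A, nothing)

introduce(::DenseLUSolver) = "dense LU (LinearAlgebra.lu)"
factorize!(M::DenseLUSolver) = (M.fact = lu(M.A); M)
solve!(M::DenseLUSolver, x) = (x .= M.fact \ x; x)    # overwrite x with the solution
is_inertia(::DenseLUSolver) = false                   # plain LU does not expose the inertia
improve!(::DenseLUSolver) = false

# usage sketch
M = DenseLUSolver([4.0 1.0; 1.0 3.0])
factorize!(M)
solve!(M, [1.0, 2.0])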

sshin23 (Member) commented May 6, 2021

Thanks for working on this @frapac!

This looks like a perfect solution for the dependency diet.

Regarding integration with HSL.jl and MUMPS.jl, I believe we will need extra wrappers on the MadNLP side anyway. I think one way to clean things up a little more would be to remove the bare-bone wrapper parts for HSL and MUMPS from MadNLP.jl and make it depend on HSL.jl and MUMPS.jl instead.

frapac (Collaborator, Author) commented May 11, 2021

Good catch. I have started a discussion here: JuliaSmoothOptimizers/Organization#19 (reply in thread)

In my opinion, JuliaSmoothOptimizers is the right organization to host the interfaces to MUMPS and HSL, as I think it will benefit the community.

A nice outcome would be to develop a common interface to sparse linear solvers with them: that would allow using all the linear solvers developed inside JSO directly within MadNLP.

@sshin23 sshin23 self-assigned this Jun 18, 2021
@sshin23 sshin23 added the enhancement New feature or request label Jun 18, 2021
@sshin23 sshin23 linked a pull request Jun 18, 2021 that will close this issue