Implementation roadmap #1

Open
jiahao opened this Issue Oct 29, 2013 · 57 comments

Contributor
jiahao commented Oct 29, 2013

A comprehensive listing of methods in the references.
Please remove any methods that are impractical or obsolete, and add methods worth implementing to the list.

Linear solvers

  • Stationary methods
    • Jacobi (LT)
    • Gauss-Seidel (LT)
    • Successive overrelaxation, SOR (LT)
    • Symmetric SOR (LT)
  • Nonstationary methods
    • conjugate gradients, CG (LT)
    • CG on normal equations (LT); supplanted by LSQR
    • GMRES (LT, Saa)
    • quasi-minimal residual, QMR (LT)
    • BiCGstab(l) (Sle)
    • Chebyshev iteration (LT)

Hermitian eigenproblems

  • Simple eigenpair iterations
    • power iteration (ET)
    • inverse iteration (ET)
    • Rayleigh quotient iteration (ET) (implemented but doesn't work)
    • subspace iteration (ET)
  • Lanczos
    • simple Lanczos (ET)
    • implicitly restarted Lanczos (ET)
    • band Lanczos (ET)
    • Jacobi-Davidson method (ET)
  • LOPCG

Generalized Hermitian eigenproblems

  • Lanczos (ET)
  • Jacobi-Davidson (ET)

SVD

  • Golub-Kahan-Lanczos (ET)

Non-Hermitian eigenproblems

  • Simple eigenpair iterations
    • power iteration (ET)
    • inverse iteration (ET)
    • Rayleigh quotient iteration (ET) (implemented but doesn't work)
    • subspace iteration (ET)
  • Arnoldi (ET)
  • implicitly restarted Arnoldi (ET)
  • block Arnoldi (ET)
  • Lanczos (ET)
  • block Lanczos (ET)
  • band Lanczos (ET)

Complex symmetric eigenproblems

  • Lanczos (ET)
  • Jacobi-Davidson (ET)

Generalized non-Hermitian eigenproblems

  • Jacobi-Davidson (ET)
  • rational Krylov subspace (ET)
  • symmetric indefinite Lanczos (ET)
  • singular matrix pencils (ET) (?)

Preconditioning

  • inexact methods (ET)

References

Contributor
timholy commented Oct 29, 2013

An awesome list. Somewhere on my hard drive I have an lsqr.jl file I could contribute, if interested.

I would say that all the stationary methods are obsolete (they are only useful now as a component of multigrid). CGN (conjugate-gradient on the normal equations) seems hardly used these days; the fact that it squares the condition number makes it useless for large systems. I don't see the point of the simple power method and friends; it seems subsumed by restarted Arnoldi (it is just the special case of restarting every iteration). I don't see the point of implementing non-restarted Arnoldi or Lanczos (these are just the special case of a large restart period).

Owner

Great list. I guess the non-restarted methods are more starting points for the restarted methods.

For the same reason, we only want restarted GMRES (of which there are several variants, if I recall correctly).

Note that I added LOPCG to the list; this is one of the best iterative eigensolvers for Hermitian systems in my experience (and has the added benefit of supporting preconditioning).

Also, the solvers for ordinary eigenproblems are merely special cases of the solvers for generalized eigenproblems, so we shouldn't have to implement both variants in most cases. With a little care in implementation, it should be possible to have negligible performance penalty in the case where the right-hand side operator is the identity, since I(x) = x is essentially free and doesn't make a copy.
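A minimal sketch of that idea, assuming a dispatch-based design (the type and method names below are illustrative, not part of the package):

    # Hypothetical identity operator: a generalized eigensolver written against
    # * and \ pays essentially nothing when handed B = IdentityOperator().
    struct IdentityOperator end

    Base.:(*)(::IdentityOperator, x) = x   # B*x returns x itself; no copy
    Base.:(\)(::IdentityOperator, x) = x   # B\x likewise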

This is an amazing issue ❤️

A BiCGSTAB implementation was posted here: JuliaLang/julia#4723

Owner

We do not know about its license. For starters, the one from Templates should be good.

Contributor
jiahao commented Nov 7, 2013

As @stevengj pointed out in the original discussion (JuliaLang/julia#4474), the state of the art for biconjugate gradients is the BiCGSTAB(l) (or ell?) variant. I purposely left the original BiCGSTAB algorithm off the list for that reason.

Contributor
jiahao commented Nov 7, 2013

I spent a few minutes transcribing, symbol for symbol, the BiCGSTAB(ell) algorithm in this gist. Unicode combining diacritics make it laughably easy to do this transcription. You can see clearly how, when l = 1, this reduces to BiCGSTAB.

Obviously this is subject to testing, etc.
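As a small illustration (not the gist itself) of why the transcription is so direct: Julia identifiers can carry combining diacritics and subscripts, so the paper's symbols can be used verbatim.

    # Illustrative only: variable names with combining diacritics/subscripts.
    r̃₀ = [1.0, 2.0, 3.0]   # "r-tilde-zero"
    x̂ = r̃₀ / 2             # "x-hat"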

@ghost
ghost commented Dec 5, 2013

A number of comments:

  1. Stationary methods are important and should be there. They can be used as smoothers in multigrid as well as first-line preconditioners for other iterative methods.
  2. We need to add least-squares solvers, in particular CGLS and LSQR.
  3. The solvers I added (bicgstab and gmres) are a direct translation from the book Templates, so I do not see any license issue.
  4. I plan to add cgls soon.
Owner

@timholy 's https://github.com/timholy/Grid.jl provides restriction and interpolation operators on grids.

Contributor

With regard to the usefulness of CGN and CGS I would point to this paper:

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.210.62

Owner
tkelman commented Dec 21, 2013

I'd be interested to see some of the symmetric indefinite methods MINRES, SYMMLQ, MINRES-QLP (http://www.stanford.edu/group/SOL/software.html)

i2000s referenced this issue in acroy/Expokit.jl on Aug 11, 2014 (open): "May various algorithms be in the repo" #1

thraen commented Jun 9, 2015

I tried the Gauss-Seidel function of the package as a smoother for multigrid. The implementation doesn't exploit sparse matrices and is thus not usable in such contexts (large, sparse problems).
Probably all the methods in stationary.jl have this issue.
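For illustration, here is a minimal sparsity-aware Gauss-Seidel sweep, assuming A is symmetric so that column i of the CSC storage doubles as row i; this is a sketch, not the stationary.jl implementation:

    using SparseArrays

    function gauss_seidel_sweep!(x::AbstractVector, A::SparseMatrixCSC, b::AbstractVector)
        rows, vals = rowvals(A), nonzeros(A)
        for i in 1:size(A, 1)
            s = b[i]
            d = zero(eltype(A))
            for k in nzrange(A, i)        # nonzeros of column i (= row i for symmetric A)
                j = rows[k]
                if j == i
                    d = vals[k]           # diagonal entry
                else
                    s -= vals[k] * x[j]   # uses already-updated entries for j < i
                end
            end
            x[i] = s / d
        end
        return x
    end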

Contributor

@thraen: You might want to look into the function sor and other iterative solvers that are optimized for sparse matrices in https://github.com/lruthotto/KrylovMethods.jl. I use the sor a lot as a preconditioner and it is reasonably fast. It makes use of the sparse matrix format.

thraen commented Jun 10, 2015

Thanks a lot for pointing that out!

On Wed, Jun 10, 2015 at 8:39 AM, Lars Ruthotto notifications@github.com
wrote:

@thraen https://github.com/thraen: You might want to look into the
function sor and other iterative solvers that are optimized for sparse
matrices in https://github.com/lruthotto/KrylovMethods.jl. I use the sor
a lot as a preconditioner and it is reasonably fast. It makes use of the
sparse matrix format.


Reply to this email directly or view it on GitHub
#1 (comment)
.

Is it worthwhile to add Aitken's delta-squared process into the list?

klkeys commented Mar 18, 2016

I noticed that this package was listed for Julia GSOC 2016. Mike Innes suggested that I pipe up here if I wanted to help. Is anybody here able and willing to serve as a mentor?

Contributor
jiahao commented Mar 19, 2016

@klkeys Thanks for your interest. I'd be thrilled to have someone help out with this package!

One of the biggest challenges is to unify the various methods available in similar packages like https://github.com/lruthotto/KrylovMethods.jl and https://github.com/JuliaOptimizers/Krylov.jl. In some sense, this package has been more experimental in nature whereas the other two have been more focused on efficient implementations that work well for practical applications. Now that we (or at least, I) have some experience in writing these algorithms, I'd be very interested to figure out a common API across all the various solvers available.

cc: @lruthotto @dpo

Contributor

I'd be glad to help work out a common API and a convergence of the packages in terms of efficiency. I'm not sure how much time I'll have to work on it myself during this summer, but I'd be happy to provide input as much as I can. Please keep me posted if you start a discussion somewhere.

dpo commented Mar 20, 2016

I'm very interested in working out a good common API as well. I'm currently quite happy with the API in Krylov.jl but I'm of course open to discussing it. It's very unlikely that I'll have free time this summer to mentor though, but I'm happy to participate in discussions when I can.

zimoun commented May 11, 2016

Will anyone work on Julia GSOC 2016?

It would be nice to have a common API to easily switch between the different implementations, or to merge them. For example,

  • the preconditioning scheme is different (e.g., M(x) in KrylovMethods.jl and M\x in IterativeSolvers.jl);
  • an abstract layer to manage preconditioners and then implement classic ones (see, e.g., Sec. 4.4 Preconditioners and Tab. 4 there);
  • write a "bridge" to be able to transparently use PETSc.jl, another GSoC 2016 project.

cc: @cortner related issue and #71

Member

I think MINRES should be added to this list.

Contributor

@zimoun: We chose to provide the preconditioner as a function handle that computes the inverse, for example M(x) = M\x, to allow the user as much flexibility as possible. This way you should be able to use routines from PETSc without an issue (you may have to write a wrapper). I'm not sure you need another abstract layer for preconditioners.

Something we could do in KrylovMethods is to allow M to be a matrix and then build the function. We do something similar for A already. Do you think that's a good way to go?
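A rough sketch of that idea (not the KrylovMethods.jl source; the name make_precond is invented for illustration): normalize whatever the caller passes to a single "apply the inverse" function.

    using LinearAlgebra

    make_precond(M::Function)       = M                          # already x -> M⁻¹*x
    make_precond(M::AbstractMatrix) = (F = lu(M); x -> F \ x)    # factor once, reuse per call

    # usage inside a Krylov loop: apply = make_precond(M); z = apply(r)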

@ChrisRackauckas : There is a basic version of minres in KrylovMethods (https://github.com/lruthotto/KrylovMethods.jl/blob/master/src/minres.jl) which might be a good starting point.

cortner commented May 17, 2016 edited

I'm not sure you need another abstract layer for preconditioners.

My own perspective: in optimisation you need more than just M(x). You need to provide M * x, M \ x, <x, My>, <x, M\y>.

But I would argue that even in linear algebra you may want to create some abstraction. E.g., is the preconditioner applied on the left or on the right? Maybe I am overcomplicating it. I do think that for Optim.jl it will be useful to use dispatch instead of function handles. I won't get too stressed if this is incompatible with IterativeSolvers.jl, though it would be a shame.
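A hypothetical sketch of the dispatch-based view (type and function names invented for illustration): one preconditioner object exposes all four operations an optimizer might need, rather than a single function handle.

    using LinearAlgebra

    struct DiagPrecond{T}
        d::Vector{T}                    # diagonal entries of M
    end

    Base.:(*)(P::DiagPrecond, x) = P.d .* x               # M * x
    Base.:(\)(P::DiagPrecond, x) = x ./ P.d               # M \ x
    inner(P::DiagPrecond, x, y)    = dot(x, P.d .* y)     # <x, M*y>
    invinner(P::DiagPrecond, x, y) = dot(x, y ./ P.d)     # <x, M\y>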

Contributor

I agree that we should distinguish between left and right preconditioners. How about having two variables, for example Mr and Ml, for this?

Regarding the abstraction issue: Can you give me an example when you need to multiply with the preconditioner?

cortner commented May 17, 2016

Regarding the abstraction issue: Can you give me an example when you need to multiply with the preconditioner?

Basically to compute distances, but I only need the inner products for that. Now that you press me to think about it, I am not sure whether it was just sloppiness on my part. I will think about it some more, but it is possible that only M \ x, <x, My>, and <x, M \ y> are needed.

But don't you need M * x for right-preconditioning?

Contributor

No, you need the inverse for left- and right-preconditioning. See, e.g., Chapter 9 in Saad's book: www-users.cs.umn.edu/~saad/IterMethBook_2ndEd.pdf

Regarding the inner product: think also about when you need that inside the linear solver. So far, I haven't seen the necessity, and I still think that inside the methods you only need M\x.

cortner commented May 17, 2016 edited

I went through a couple of algorithms that I've used in the past. I now think most can be written (or re-written) without <x, Mx>, but I am not sure about all of them. Three examples:

  • E.g., the Hager-Zhang nCG needs to compute the length of a step; see Optim.jl/src/cg.jl, line 236: (here, P = M^{-1})
        dPd = precondinvdot(s, cg.P, s)

Maybe this can be removed, but I don't immediately see how; I can turn it into a recursion, but then I still need it at least once.

  • Another example comes up in saddle-search, but very similar context: to project out some "bad directions" y_j we add \sum_{j} <x, y_j>_P^2 to the objective, but this is part of the algorithm, not part of the user input. In the same context (saddle search) you need to solve a generalised eigenvalue problem H v = lam P v and I don't see how to do that without providing <v, Pv>.
  • Finally, trust-region algorithms need to work with step-lengths, and once you hit the boundary of the trust region you also switch to a generalised eigenvalue solver which I think needs P * v again, or at least <v, Pv>. (on this last point, I'd need to go back and double-check the codes)
dpo commented May 17, 2016

There's an implementation of MINRES in Krylov.jl: https://github.com/JuliaOptimizers/Krylov.jl/blob/develop/src/minres.jl. It supports preconditioning. Only SPD preconditioning makes sense for MINRES, except in very specific cases.

zimoun commented May 17, 2016

@lruthotto I agree, and I think that providing only the matrix-vector product as a function is the right direction, as you explain and as it is implemented in KrylovMethods.jl. My point is: it is not consistent with the current behavior of IterativeSolvers.jl; see, e.g., gmres l. 81, which uses M\x hard-coded. That was the starting point of the discussion in #71.
From my point of view, all these operations should be wrapped in a type in the flavor of LinearOperators.jl.

Since we are talking about projection methods, this means relative to an inner product, doesn't it?
(Note: I am not sure, but I think <v, Mv> and/or <v, M\v> are used in some deflated methods?)

cortner commented May 18, 2016 edited

@zimoun: why does M \ x, which is the same as \(M, x), not count as a function for you?

cortner commented May 18, 2016

@lruthotto Just to add to my examples above: it is maybe becoming clear now that for linear algebra you only need M \ x, and that any other functionality would be an extension used in other contexts. So I may just leave you in peace here and start a new issue in Optim.jl, but try and keep an eye on this.

zimoun commented May 18, 2016 edited

@cortner I am not sure I understand the question. Maybe I am missing a point.
From what I know, M(x) eats a vector, does whatever it wants, and then returns a vector.
If M is a matrix, with nothing more defined, then M\x returns the solution y of M*y = x (computed via an LU or QR factorization).

Let us consider that I am able to build the matrices P and A, and I would like to solve P*A*u = P*b.

  • using KrylovMethods.jl, I define M(x) = P*x and feed it to gmres;
  • using IterativeSolvers.jl, I have to play with the type system to define some "object" M whose M\x operation is provided by P*x, and then feed it to gmres.

From my point of view, this is not consistent (and the behavior of IterativeSolvers.jl appears confusing to me, because \ generally means "solve", but in this context it means "apply P").

cortner commented May 18, 2016 edited

@zimoun If M is an instance of a type MyPreconditioner, storing whatever information it needs, then \\(M, x) = M \ x can be defined any way you want. So the mapping x \mapsto M \ x does just the same: eat a vector and spit out a vector.

If we think of M as approximating H, then even if you are actually storing M^{-1}, you want to call the application of this M^{-1} "M \ x", which is widely understood to mean M^{-1} * x.
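A minimal sketch of that point (the type and field names are made up for illustration): the object stores the inverse explicitly, yet the solver-facing operation is still spelled M \ x.

    struct MyPreconditioner{T}
        Minv::Matrix{T}                 # the explicitly stored inverse M⁻¹
    end

    # M \ x means "apply M⁻¹ to x", however that happens to be implemented
    Base.:(\)(M::MyPreconditioner, x::AbstractVector) = M.Minv * x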

Contributor

@cortner: Thanks for starting the discussion and for your examples.

Take a look at https://github.com/lruthotto/KrylovMethods.jl/tree/devPrecond. By adding just a few lines, I was able to accommodate M as a matrix or a vector (useful for diagonal preconditioners). What do you think about these changes?

cortner commented May 18, 2016 edited

@lruthotto That is essentially what I have in mind as well. Two (three) comments:

  • I would not write in the documentation that M should be either a function that computes M\x or a matrix or vector such that M\x is computed; rather, M should be of an arbitrary type as long as preconditioner(M, x) can be called and returns M^{-1} x (by whatever means). This doesn't prevent you from keeping the default behaviour you've already implemented for functions, arrays or vectors. To give you two concrete examples (see also the sketch after this list):
    • if you look at Optim.jl/cg.jl you will see that Tim Holy implemented M^{-1} x for a diagonal preconditioner by storing M_ii^{-1} rather than M_ii. By wrapping this into a type, it can then be specified which of these the vector is understood to provide. E.g., one could provide the types DiagonalPrecon and InverseDiagonalPrecon to take care of this.
    • As @zimoun pointed out, in some cases (BEM?) one actually has the inverse explicitly available, so the matrix being passed might already be the inverse. Again, wrapping into types lets one distinguish the two. (That said, I agree with your default behaviour!)
  • I would not choose preconditioner for the name of the method that applies it, because it sounds to me like you might be constructing the preconditioner. Personally I would actually stick with M \ x, but I appreciate that this seems to cause confusion. Maybe precondition, applyprecond, just precond, or something like that? Tim Holy called it precondfwd!. To be honest, I haven't thought of a really good name. In the end I am not too fussed about this.
  • I would probably do the default implementation in-place. A wrapper can then be provided to call whatever the user provides, e.g., precondfwd!(px, M, x) = copy!(px, precondfwd(M, x))
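A minimal sketch of the wrapper types and the in-place default suggested above (the names are illustrative only):

    struct DiagonalPrecon{T};        d::Vector{T}; end   # stores M_ii
    struct InverseDiagonalPrecon{T}; d::Vector{T}; end   # stores M_ii^{-1}

    precondfwd(M::DiagonalPrecon, x)        = x ./ M.d   # apply M^{-1}
    precondfwd(M::InverseDiagonalPrecon, x) = M.d .* x

    # default in-place wrapper around whatever out-of-place method the user provides
    precondfwd!(px, M, x) = copyto!(px, precondfwd(M, x))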
stevengj commented May 18, 2016 edited

For preconditioner (KSP) objects in PETSc.jl, we provide A_ldiv_B! (in-place M \ x), which corresponds to how preconditioners are defined in PETSc. Defining A_ldiv_B! for an object is sufficient to define \ as well, I believe, and it allows the solver to take advantage of in-place operations if it wants. So I would recommend this for IterativeSolvers too.

We also provide a function to do At_ldiv_B! (in-place M' \ x); I thought this was missing from Base, but I just noticed that Base.LinAlg.At_ldiv_B! is in fact defined.
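As an illustration of that convention (KSPPrecond is a made-up stand-in, not the PETSc.jl type; in current Julia the in-place name is ldiv! rather than A_ldiv_B!): define the in-place solve once and derive the out-of-place \ from it.

    using LinearAlgebra

    struct KSPPrecond
        state::Any                       # whatever the wrapped solver object carries
    end

    function LinearAlgebra.ldiv!(y::AbstractVector, M::KSPPrecond, x::AbstractVector)
        copyto!(y, x)                    # placeholder: a real preconditioner applies M⁻¹ here
        return y
    end

    Base.:(\)(M::KSPPrecond, x::AbstractVector) = ldiv!(similar(x), M, x)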

stevengj referenced this issue in JuliaParallel/PETSc.jl on May 18, 2016 (open): "rename KSPSolveTranspose! to Base.LinAlg.At_ldiv_B!" #70

Contributor

@cortner: Thanks for your encouragement and suggestions. I just pushed some changes to the branch and switched more methods over to the new applyPrecond call. I quite like it: the code is not much longer and is more flexible. Preconditioners can now be provided as functions, arrays, or vectors.

What still remains is the option of left vs. right preconditioning in CG and other methods, as well as preconditioning in minres (which @dpo has already implemented in his package).

cortner commented May 19, 2016 edited

Personally, I like applyPrecond, but I fear that it clashes with Julia naming conventions. Would just applyprecond or apply_precond be OK?

EDIT: Unless I hear otherwise I now think that the first bullet-point is crucial?! @stevengj: would you be willing to comment?

Although A_ldiv_B! hurts my eyes, this has some advantages:

  • it is in Base, which means that if two modules implement it then they can import from Base and don't need to know of each other. Btw, this is an issue I never quite wrapped my head around - is this discussed somewhere in depth? Can anybody comment?
  • it is widely understood what it means
  • you don't need to implement boiler-plate code for the standard operators (even though this is minimal effort)
Member

Would an Algebraic Multigrid be considered for this package?

cortner commented May 19, 2016 edited

I've written a simple wrapper for PyAMG which I was hoping to integrate, but if you have something better, I'd be very interested.

P.S.: I now made it public at:
https://github.com/cortner/PyAMG.jl
(this is really just wrapping some documentation and tests around PyAMG + PyCall)

Owner
ViralBShah commented May 19, 2016 edited

PyAMG is a pretty solid solver.

Member

I think we should get a native Julia version, but PyAMG will work really well in the meantime. Good job!

cortner commented May 19, 2016 edited

Thank you - I'll be happy to register the package then.

ChrisRackauckas referenced this issue in JuliaLang/METADATA.jl on May 19, 2016 (closed): "Register PyAMG: v0.0.1" #5222

Contributor

Talking about native Julia implementations of multigrid methods: @erantreister has started putting out a package here: https://github.com/JuliaInv/Multigrid.jl

We're still working on some details, but it'll be in good shape soon. Always open to feedback/suggestions.

cortner commented May 20, 2016 edited

I am now almost certain that we should use A_ldiv_B! or simply \ for applying the preconditioner. If we do not, then we would have to create a Preconditioning.jl module that just defines the interface and nothing else. This is a real shortcoming of the module system, which I believe traits are supposed to fix? But by using A_ldiv_B! or \ we can simply overload operators that are already in Base, so there is no problem anymore.

I would really appreciate to get this confirmed (or contradicted) by somebody who has a deeper understanding of Julia than I do.

Owner

I really don't see the point in introducing a new, special linear-solve function just for preconditioners. Applying the preconditioner is a linear solve, and \ and A_ldiv_B! are the functions for linear solves; they can be overloaded. Regarding diagonal preconditioning, we have Diagonal in Base.LinAlg for that. If some of the abstractions in Base.LinAlg are insufficient, then please raise issues.
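For example (a small illustration of that point; Diagonal lives in the LinearAlgebra standard library in current Julia), a Jacobi-style preconditioner needs no custom type at all:

    using LinearAlgebra

    A = [4.0 1.0; 1.0 3.0]
    P = Diagonal(diag(A))    # diagonal (Jacobi) preconditioner
    r = [1.0, 2.0]
    z = P \ r                # applies P⁻¹ entrywise, no custom \ overload needed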

cortner commented May 20, 2016

@andreasnoack Thank you - I agree it would be good to stick with this.

Hello all!

First, thank you: I am a frequent user/abuser of this package! (I hope this is a good thread for this post.)

1- I was wondering if you have any plans to support the LinearMaps.jl package as a general interface, instead of MatrixFcn, which tries to accomplish the same with fewer options. I have been using LinearMaps instead of MatrixFcn with IterativeSolvers, and for the most part it works.

2- When the preconditioners are passed as a function (as in gmres, for example), it would be better to assume that the inverse is passed directly, so that Pinv*x replaces Pl\x in the code, or perhaps to offer both options. For most of the things I do, the linear operators are large-scale and too big to build explicitly.

3- I also cannot wait until this package is integrated with different optimization packages to replace all direct solvers. It will open many doors for large-scale optimization.

Again, thanks for the developers,
Cheers!

syntapy commented Mar 23, 2017

Hey, I'd like to work on this for GSOC 2017 if y'all would be fine with that. I think I'd have fun with it. I'm not sure yet whether I'd be able to do GSOC full time since I have work obligations, but if not, I'd still love to do this part-time for free.

haampie commented Mar 27, 2017

Hi all,

I would like to work on this project as well for GSoC 2017. What are the current priorities? Implementing more iterative solvers? Integration with LinearMaps.jl?

Some ideas I have: implementing (eigen)solvers listed above such as Jacobi-Davidson, BiCGStab(l), and IRAM (but there are already some old, open PRs on these); and maybe adding some new flavours to existing methods, like [thick restarted / augmented subspace / deflated] GMRES. All these latter methods are based on the idea that the Krylov subspace obtained at the end of a cycle contains very useful spectral data which can be exploited in the next cycle, potentially restoring the superlinear convergence behaviour observed in full GMRES.

barche commented Mar 29, 2017

From my point of view (as a potential user) I would be interested most in preconditioners (ILU, AMG).

Hi Bart,
I have a package with some multigrid preconditioners in Multigrid.jl, in particular smoothed-aggregation AMG. It's part of the JuliaInv collection of packages.
Cheers,
Eran Treister.

barche commented Mar 31, 2017

Aha, good to know, thanks!
