Magnus and other linear operator solvers #90

Closed
ChrisRackauckas opened this issue Aug 1, 2017 · 6 comments

@ChrisRackauckas (Member)

It seems many physicists have problems of the form u' = A*u and u' = A(t)*u. This A can come from a PDE discretization, or u' = H(t)*u shows up naturally from Schrödinger's equation. In these cases, we need to exploit the linearity of the problem in order to do well.

This can all use the DiffEqOperator path where a user defines a DiffEqOperator as their function. This also contains an update_coefficients!(A,t,u) function for handling time and state dependence. Then there are different things to tackle here. For one, we can make the stiff solvers "linear-aware". This can be done by, wherever we would be solving a nonlinear equation using a Newton method, instead doing an

if typeof(f) <: DiffEqOperator
  # Linear Solve
else
  # Nonlinear Solve
end

In the # Linear Solve branch, it will just call update_coefficients!(A,t,u) and then \, so whatever override the user gives for how the matrix updates in time, and whatever override for how the solve is supposed to take place, will be used. (@shivin9 this is why the interface matching is important here)
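A minimal sketch of what that could look like from the user side (the names here are illustrative, not the final DiffEqBase interface):

import Base: \

# Illustrative user-defined operator for u' = A(t)*u with A(t) = sin(t)*A0
struct MyTimeDepOperator
    A0::Matrix{Float64}   # fixed base matrix
    A::Matrix{Float64}    # current coefficients A(t), updated in place
end

# The solver calls this before every linear solve
function update_coefficients!(L::MyTimeDepOperator, t, u)
    L.A .= sin(t) .* L.A0
    return L
end

# ...and then the "Linear Solve" branch is just a backslash
\(L::MyTimeDepOperator, b) = L.A \ b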

This will make things like Trapezoid automatically become Crank–Nicolson, and will be very useful with things like BDF methods and ApproxFun. In addition, we can pull some inspiration from @jagot for Magnus integrators:

http://iopscience.iop.org/article/10.1088/1367-2630/14/10/105008
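Concretely, the trapezoidal rule applied to a linear operator is exactly the Crank–Nicolson update, a single linear solve per step. A minimal sketch, assuming A is a plain function t -> Matrix:

using LinearAlgebra  # for the identity I

# One Crank–Nicolson step for u' = A(t)*u:
#   (I - dt/2*A(t+dt)) * u_next = (I + dt/2*A(t)) * u
crank_nicolson_step(A, u, t, dt) =
    (I - (dt/2) * A(t + dt)) \ ((I + (dt/2) * A(t)) * u)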

Using the split operator interface, we can also handle u' = (A + B(t))*u where both are linear operators. I don't know of algorithms that work directly on that form, but technically the definition already exists.

The DiffEqOperator will have a matrix-free version pretty soon which closely matches that of LinearMaps.jl, but the DiffEqOperator interface defines expm and expmv on the operator itself:

https://github.com/JuliaDiffEq/DiffEqBase.jl/blob/master/src/diffeq_operator.jl#L27

This means that the matrix-free versions will allow an optional method for directly defining expmv (and the array versions will get a quick upgrade to allow direct definitions of expm and expmv if someone has a use for it). This solves @jagot's problem, since he wanted to speed up the integrator by manually supplying a known closed form for the operator exponential, something like exp(A*dt/2), and that interface would allow doing so.
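A rough sketch of how such a direct definition could look (ScalingOperator and the free-standing expmv here are illustrative stand-ins for the eventual interface):

import Base: *

# Illustrative matrix-free operator whose exponential action is known in
# closed form: A = λ*I, for which exp(t*A)*v == exp(λ*t)*v exactly.
struct ScalingOperator
    λ::Float64
end

*(L::ScalingOperator, v) = L.λ .* v                  # matrix-free action A*v

# Direct analytic definition, skipping any Krylov/Padé machinery
expmv(t, L::ScalingOperator, v) = exp(L.λ * t) .* v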

DiffEqApproxFun.jl will get a special AbstractDiffEqOperator defined from ApproxFun, which will define a linear operator on Fun types via ApproxFun operators. Even nonlinear ApproxFun operators can take this route, because it will then use the specialized ApproxFun Newton solver. What's interesting here is that DiffEq will be calling \ on the operator, so we can specialize the operator to hold the boundary conditions as well and do things like chop(Operator\[bcs;u],tol). With the type check above, this will make the nonlinear parts on ApproxFuns work just like SpectralTimeStepping.jl, which is exactly what we needed @dlfivefifty.

As for making it easy to solve finite difference PDEs, we are building a library which will automatically generate the linear operators for PDE discretizations of arbitrary approximation order and derivative order via Fornberg's algorithm:

https://github.com/JuliaDiffEq/PDEOperators.jl

This is almost fully compatible with the whole interface and will include things like time-dependent BCs, which will make really good test cases for u' = A(t)*u.
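For the simplest case, the generated operator is just the standard central difference stencil; here is a hand-built version of what such a library would produce (the function name is mine):

using LinearAlgebra

# Second-order central differences for d²/dx² on a uniform grid with spacing
# dx, with boundary values assumed folded into the right-hand side
second_derivative_operator(N, dx) =
    Tridiagonal(fill(1.0, N - 1), fill(-2.0, N), fill(1.0, N - 1)) / dx^2

A = second_derivative_operator(100, 0.01)  # use as the A in u' = A*u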

Things I don't know about: @jagot,

> This is maybe more exotic, but it could potentially be useful for me to couple a numerical solution in the interior with analytic asymptotic solutions. That can presumably be done in some custom callback at every timestep.

What do you mean by this?

@ChrisRackauckas (Member, Author)

For those curious, we already have partial linear methods implemented:

https://github.com/JuliaDiffEq/OrdinaryDiffEq.jl/blob/master/test/linear_nonlinear_convergence_tests.jl#L20

so the infrastructure for doing all of this is actually already in there. Only the specific perform_step! methods for specific algorithms need updates to handle AbstractDiffEqOperators.

@dlfivefifty

Does this mean we need to implement expmv in ApproxFun?

@jagot (Contributor) commented Aug 1, 2017

Hi,

Looks very nice, indeed. A couple of things:

  • For operator splitting, it is important that the exponentiations are placed symmetrically (Strang splitting), like so:
    exp(A*dt/2) * exp(B*dt) * exp(A*dt/2)
    When time-stepping, this then gives a convergence rate of dt^2. This extends intuitively to multiple operators. Maybe this is already that way?

  • Post perform_step, every cycle:

    • I (usually) need to calculate a couple of observables, of the form
      ⟨A⟩ = ∫ u* A u dx
      where A is some operator and * denotes complex conjugation; e.g. for calculating the norm of the wavefunction, A = 1. This, I think, overlaps with the time series functionality.
    • To avoid unphysical reflections at the boundary, one usually tries to absorb whatever flux reaches the end of the box. In computational electrodynamics (usually FDTD), one uses a perfectly matched layer, which absorbs one wavelength perfectly. That is not really applicable in my case; instead we use a soft mask function at the edge, or complex-rotate the radial coordinate (exterior complex scaling) such that outgoing waves are exponentially damped.
    • Other technicalities, but I don't think any of this will be a problem.
  • Multiple RHS for Crank–Nicolson (or other schemes): Usually, the wavefunction is written as a linear combination of angular basis vectors (spherical harmonics),
    Ψ(r,θ,φ) = Σ_{ℓm} u_{ℓm}(r) Y_{ℓm}(θ,φ),
    and the Hamiltonian will then also be partitioned according to spherical symmetry, i.e. block-diagonally with one block H_{ℓm} per angular channel.
    For a second-order finite-differences discretization of the kinetic operator, -∇²/2, each block is a tridiagonal matrix.
    It turns out that H_{ℓm} is the same for every m, which we can exploit by

    • not saving more than one copy of the (sub-)Hamiltonian, and
    • viewing the different m's as column indices in a matrix, and performing the Crank–Nicolson propagation as
      1. a tridiagonal matrix–matrix product, then
      2. a tridiagonal matrix–vector solve (?gtsv), with multiple right-hand sides (if an optimized implementation exists).

    It would be silly for me not to use this obvious fact, but I am unsure how easy this is to accomplish (edit: in a general framework, I mean).

  • Regarding my unclear comments on analytical asymptotics: sufficiently far from the 'interaction region' (i.e. outside a certain radius), instead of propagating the waves numerically, one can map the solution onto plane waves exp(ik·r), which are trivial to propagate. I think this can be implemented as a custom step, along with calculating observables, etc.

@ChrisRackauckas (Member, Author) commented Aug 1, 2017

> Does this mean we need to implement expmv in ApproxFun?

I'm developing the exponential integrators to default to expm but have a choice for expmv. From this:

https://github.com/JuliaApproximation/SpectralTimeStepping.jl/blob/master/src/timeevolution.jl#L55

it looks like a dense exponential is fine because the operator is only N×N, where N is the number of coefficients, right?

(the expmv variant will take a bit of time though, mostly because we are missing the relevant pieces to do this efficiently on matrices in the Julia ecosystem, at least beyond Phi0)
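For reference, dense-exponential time stepping for a constant operator is just the following (a sketch; expm was the Julia 0.6 name, and on matrices it is exp since 0.7):

# u(t+dt) = exp(dt*A)*u(t); fine when A is only N×N for small N
function exp_step!(u, A, dt)
    u .= exp(dt * Matrix(A)) * u
    return u
end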

@ChrisRackauckas (Member, Author)

> For operator splitting, it is important that the exponentiations are placed symmetrically

Yes, methods which do splitting have this as part of the method. Essentially, the splitting interface will be like ODEProblem((A,B),u0,tspan), where the user supplies the operators and thereby determines the splits, and the methods will then be defined appropriately symmetrically (all of the algorithms I've found use that Baker–Campbell–Hausdorff result).
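For concreteness, the symmetric composition described above looks like this as a single step (dense-matrix sketch; a real method would use expmv instead of forming full exponentials):

# Strang splitting step for u' = (A + B)*u, second order in dt:
#   u_next = exp(A*dt/2) * exp(B*dt) * exp(A*dt/2) * u
function strang_step(A, B, u, dt)
    Ea = exp((dt / 2) * A)
    Eb = exp(dt * B)
    return Ea * (Eb * (Ea * u))
end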

> Post perform_step, every cycle:

Those look like they can be handled by the callbacks. You'll need to save the observables to your own array, but it wouldn't be difficult. That may be tutorial-worthy, though.
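Something along these lines (a sketch against the current callback interface; the observable and problem here are placeholders):

using OrdinaryDiffEq, LinearAlgebra

# Record the observable <u, A*u> after every step with a DiscreteCallback
observables = Float64[]
Aobs = Diagonal(ones(4))            # A = I here; A = 1 is the norm case

condition(u, t, integrator) = true  # fire on every step
affect!(integrator) = (push!(observables, dot(integrator.u, Aobs * integrator.u)); nothing)
cb = DiscreteCallback(condition, affect!)

prob = ODEProblem((du, u, p, t) -> (du .= -u), ones(4), (0.0, 1.0))
solve(prob, Tsit5(), callback = cb)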

> For a second-order finite-differences discretization of the kinetic operator, -∇²/2, each block is a tridiagonal matrix.
> Not saving more than one copy of the (sub-)Hamiltonian

Or save zero copies with a matrix-free version. Or alias the underlying matrices. I think this could be something left to the user.

> tridiagonal matrix–matrix product
> tridiagonal matrix–vector solve (?gtsv), with multiple right-hand sides (if an optimized implementation exists)

Julia will actually do this automatically if the operator is stored as a Tridiagonal. lufact! specializes on Tridiagonal matrices to do a faster factorization, and once the factorization is created you can repeatedly \ against the multiple RHS. Or you can use a block Krylov method, which IterativeSolvers.jl is working on.
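For example (using the post-1.0 name lu for what was lufact!/lufact at the time):

using LinearAlgebra

# Factor the tridiagonal matrix once, then hit it with all the right-hand
# sides at once (one column per m)
N, M = 1000, 16
T = Tridiagonal(rand(N - 1), 2 .+ rand(N), rand(N - 1))
F = lu(T)       # specialized O(N) tridiagonal factorization
B = rand(N, M)  # M right-hand sides
X = F \ B       # solves every column against the single factorization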

This means I should probably make sure that the DiffEqArrayOperator works with special matrix types.

@ChrisRackauckas (Member, Author)

Implemented in #1061 and continued in #1064
