
Steady State solver #252

Closed
PhilipVinc opened this issue Jul 11, 2019 · 6 comments
@PhilipVinc
Contributor

PhilipVinc commented Jul 11, 2019

Hi,

I just wanted to bring to your attention a small package authored by a colleague of mine for computing the steady state of a Markovian Lindblad system. It basically solves the linear system L\rho = 0 while also enforcing the trace normalization Tr(\rho) = 1.

It's a very efficient method, allowing one to find the steady state of quite large systems.
In our group we've been using it for quite some time and it works very well. Maybe you'd be interested in getting in touch to integrate it into QuantumOptics...
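For readers unfamiliar with the trick, here is a minimal dense sketch of the idea using only LinearAlgebra (not the package's actual code; `liouvillian` and `steady_state_dense` are hypothetical names): build the superoperator L in the column-stacking convention, then replace one redundant row of the singular system Lx = 0 with the trace functional so that Tr(ρ) = 1 is enforced.

```julia
using LinearAlgebra

# Superoperator of dρ = -i[H,ρ] + Σⱼ (JρJ† − ½{J†J,ρ}) in the
# column-stacking convention vec(AXB) = (Bᵀ ⊗ A) vec(X).
function liouvillian(H, Js)
    d = size(H, 1)
    Id = Matrix{ComplexF64}(I, d, d)
    L = -im * (kron(Id, H) - kron(transpose(H), Id))
    for J in Js
        K = J' * J
        L += kron(conj(J), J) - 0.5 * (kron(Id, K) + kron(transpose(K), Id))
    end
    return L
end

# Solve Lρ = 0 subject to tr(ρ) = 1 by overwriting one row of the
# singular system with the trace functional. (Replacing row 1 assumes
# the remaining equations still determine ρ, true for generic Lindbladians.)
function steady_state_dense(H, Js)
    d = size(H, 1)
    L = liouvillian(H, Js)
    L[1, :] .= vec(Matrix{ComplexF64}(I, d, d))  # ⟨row, vec(ρ)⟩ = tr(ρ)
    b = zeros(ComplexF64, d^2)
    b[1] = 1
    return reshape(L \ b, d, d)
end
```

For a decaying two-level system (H = 0, a single lowering jump operator) this returns the pure ground state with unit trace, as expected.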

@david-pl
Member

Thanks for pointing to that package, it looks really good. Since the steady-state solvers in QO.jl either solve the ODE or diagonalize the Liouvillian, there is definitely room for improvement.

From our/my side this would certainly be a good addition to the steadystate module. Since SteadyState.jl is still a work in progress, though, it might eventually go beyond the scope of a sub-module, in which case it might be better to keep it as a separate extension. What do you think? Do you know if there are plans to extend SteadyState.jl further?

@Z-Denis

Z-Denis commented Sep 24, 2019

Hi,

I am the author of the above-mentioned SteadyState.jl.

It was first written generically, so as to handle operators provided as any kind of matrix, with a view to exploiting the performance of running such iterative solvers on GPUs. I have very recently refactored the code, however, so that its signatures are roughly consistent with those of the solvers in QuantumOptics.steadystate. Strictly speaking, the solver itself is only about 30 lines long; the rest is almost a copy/paste of code from src/master.jl. Only lazy operators currently pose problems, as they are not really reducible to matrices.

Such iterative methods usually converge within a few seconds for non-pathological systems with a few hundred states, and they prove especially efficient compared to brute-force integration when a very low tolerance is required.
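To make the "iterative" part concrete: Krylov solvers such as bicgstabl! and idrs! never need L as an explicit matrix, only its action on a vectorized density matrix. A hedged sketch of that action (`lindblad_rhs!` is an illustrative name, not SteadyState.jl's internals):

```julia
using LinearAlgebra

# In-place action ρ ↦ L(ρ) of the Lindblad generator,
#   dρ = -i[H, ρ] + Σⱼ (J ρ J† − ½{J†J, ρ}).
# Wrapped as a linear map over vec(ρ), this product is all an
# iterative solver ever calls, so L is never materialized.
function lindblad_rhs!(dρ, ρ, H, Js)
    dρ .= -im .* (H * ρ .- ρ * H)
    for J in Js
        K = J' * J
        dρ .+= J * ρ * J' .- 0.5 .* (K * ρ .+ ρ * K)
    end
    return dρ
end
```

For a d-dimensional Hilbert space this costs a handful of d×d multiplications per solver iteration, versus storing and factorizing a d²×d² superoperator.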

As a benchmark of the method, I consider the driven-dissipative transverse-field Ising model with N spins and periodic boundary conditions, at the crossover between the ordered and disordered phases, as given by:

using QuantumOptics, SteadyState, IterativeSolvers

function transverse_ising(N)
    γ = 1.0
    V = 2γ
    g = 1.3γ # ~ at the crossover

    lbasis = SpinBasis(1//2)
    gbasis = CompositeBasis([lbasis for i in 1:N])

    sm = sigmam(lbasis)
    sx = sigmax(lbasis)
    sy = sigmay(lbasis)
    sz = sigmaz(lbasis)

    H = SparseOperator(gbasis)
    for i in 1:N
        H += V * embed(gbasis, [i, mod1(i+1, N)], [sz, sz])
        H += embed(gbasis, i, g/2 * sx)
    end

    J = sqrt(γ) .* [embed(gbasis, i, sm) for i in 1:N]

    return H, J
end

H, J = transverse_ising(10); # Hilbert space of dimension 1024
dH = DenseOperator(H);
dJ = DenseOperator.(J);

From this one measures (on 6 CPU threads):

julia> @time SteadyState.iterative(dH, dJ; tol=1e-3); # Here bicgstabl! is used by default
398.616856 seconds (3.90 k allocations: 7.127 GiB, 0.27% gc time)
julia> @time SteadyState.iterative(dH, dJ, idrs!; tol=1e-3);
411.671667 seconds (7.59 k allocations: 738.624 MiB, 0.12% gc time)
julia> @time SteadyState.iterative(dH, dJ, idrs!; tol=1e-6);
658.327444 seconds (11.72 k allocations: 740.137 MiB, 0.05% gc time)

This works with CuArrays too. For N=11 spins (2048 states) and greater precision, we get:

julia> using CuArrays, BenchmarkTools
julia> H, J = transverse_ising(11); # Hilbert space of dimension 2048
julia> dH = DenseOperator(H);
julia> dJ = DenseOperator.(J);
julia> @btime SteadyState.iterative(cu(dH.data), map(x->cu(x.data), dJ); tol=1e-8) # defaults to idrs!
  80.000 s (426821 allocations: 796.23 MiB)

Achieving this precision by brute-force integration is rather hopeless, even for very small chains, due to the very slow relaxation of the system for these parameters.

If you think this would be a useful addition to QuantumOptics.steadystate, I would be very glad to contribute.

@PhilipVinc
Contributor Author

@david-pl As a side note: this method is crazy fast compared to steadystate.master or steadystate.eigenvector. It's probably more similar to an eigenstate solver, in that it directly targets the lowest-lying eigenstate.

Even for small systems you get a considerable (>10x) speedup, and as a bonus you can scale up to a rather large number of sites.

@david-pl
Member

Hi @Z-Denis,

Thanks for your comment. I have to admit I'm very impressed by the benchmarks. As @PhilipVinc says, even when I run your code for N=2 the steady-state solvers currently implemented hardly compare to your method. So from my side this would be great to have available in QuantumOptics.steadystate.

One thing we could discuss is switching from the BLAS.gemm! methods to the gemm! methods implemented in QuantumOptics. On the one hand, this would have two advantages: 1) we have a gemm! method for sparse-dense multiplication, which would allow keeping H and J sparse without resorting to the fallback method; 2) it would also take care of lazy operators, since there are gemm! methods for them as well.
On the other hand, the QuantumOptics gemm! methods don't work with CuArrays, and we definitely don't want to lose that support. We could write separate methods for this, though.
Do you think this is worth looking into?
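[For context on the gemm! point: in current Julia the five-argument mul! plays exactly this role. It dispatches on the array types, picking a sparse-dense kernel for SparseMatrixCSC, BLAS for dense arrays, and, with CUDA.jl loaded, CUBLAS for GPU arrays. A sketch with an illustrative helper name:]

```julia
using LinearAlgebra, SparseArrays

# out += -i(Hρ − ρH), the commutator part of the master equation.
# mul!(C, A, B, α, β) computes C = α·A·B + β·C and dispatches on the
# array types, so one code path serves sparse, dense, and GPU data.
function add_commutator!(out, H, ρ)
    mul!(out, H, ρ, -im, true)  # out += -i H ρ
    mul!(out, ρ, H, im, true)   # out += +i ρ H
    return out
end
```

With a sparse H and dense ρ this avoids ever densifying H, which is the advantage the sparse-dense gemm! method provides.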

So bottom line: If you're willing to contribute this, I would be happy to add it to QuantumOptics. Would you like to create a PR? I'm also happy to have further discussions!

@Z-Denis

Z-Denis commented Sep 25, 2019

After discussing with @PhilipVinc we have a few thoughts on this:

  • SteadyState.jl depends on IterativeSolvers.jl and LinearMaps.jl to work. Integrating this method into QuantumOptics.jl entails adding those two dependencies. Is this OK for you?
  • SteadyState.jl as it stands is generic, in that it can support any kind of AbstractArray. It would be a shame to lose this generality.
  • Right now, in some internal functions (analogous to timeevolution.dmaster_h), I keep in memory only the non-Hermitian Hamiltonian iHnh and the vector of jump operators J, not their Hermitian conjugates Hnh_dagger and Jdagger (unless they are explicitly passed to the method, e.g. for a Bloch-Redfield master equation). In your code, you keep both in memory. I would argue that for the Lindblad case this is not needed. While not a problem for small systems, when working with very big lattices with dissipation occurring at each site, or on the GPU, this is absolutely detrimental, because memory becomes the bottleneck. I guess that if QuantumOptics.jl wants to stay performant for very complex calculations, this would need to be addressed. If it is, we could directly call timeevolution.dmaster_h internally, which would also make the method compatible with operators that have no array representation (lazy, FFT, ...).
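[On the memory point: in Julia, `adjoint` produces a lazy wrapper rather than a copy, so the non-Hermitian evolution can be evaluated from Hnh and J alone. A hedged sketch mirroring, but not copying, the dmaster-style functions discussed above; `dmaster_nh!` is a hypothetical name:]

```julia
using LinearAlgebra

# dρ = -i(Hnh·ρ − ρ·Hnh†) + Σⱼ J ρ J†,  with  Hnh = H − (i/2) Σⱼ J†J.
# Hnh' and J' are lazy Adjoint wrappers: no Hnh_dagger/Jdagger
# copies are ever allocated or stored.
function dmaster_nh!(dρ, ρ, Hnh, Js)
    dρ .= -im .* (Hnh * ρ .- ρ * Hnh')
    for J in Js
        dρ .+= J * ρ * J'
    end
    return dρ
end
```

For L jump operators on a d-dimensional space this halves the O(L·d²) operator storage, which is exactly what matters on large lattices or on the GPU.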

We believe that there are two possible ways to implement this in QuantumOptics.jl:

  • We can integrate the iterative method by more or less copy-pasting it into QuantumOptics.steadystate (with a few amendments so that it works with any Operator). I would keep my version of SteadyState.jl so that people who want to use it with generic array types still can. Then, if one day QuantumOptics.jl supports some GPUDenseOperator type, it should work out of the box.
  • We can make SteadyState.jl independent of QuantumOptics.jl and instead have QuantumOptics.jl depend on SteadyState.jl, implementing methods like residual! for AbstractOperator (which are more or less the same as timeevolution.dmaster_h); dispatch should then take care of everything. If this route is taken, we could also consider moving the generic SteadyState.jl package to the QO organization (perhaps under a different name, to avoid confusion with QuantumOptics.steadystate) so that users who want the underlying idea with bare arrays can still use it.

@david-pl
Member

  • Sure, that would be okay.

  • No argument there. I didn't mean to suggest removing the generic method for matrices, only that one could probably add specialized methods for sparse and lazy operators.

  • Good point. The thing is that when solving time evolutions, the actual limiting factor in storage will usually be rho, which is a dense array. So when working with sparse or lazy H and J, storing an extra copy of those is not so bad. Of course, what you say still holds: we're wasting some memory there. However, we don't have gemm! methods implemented for complex conjugation/transposition, so doing this in general would require implementing one such method for sparse arrays and for each lazy type. I'm not sure how much work that is, but I'll have a look.

Regarding your suggestions on how to proceed: I would be fine either way, but I'm slightly more inclined toward the first option. The second depends a bit on your plans for SteadyState.jl: if you plan on making a lot of changes, it might not be the best idea to make QuantumOptics depend on it. But it's really your choice, so just let me know how you want to go about it.
In any case, I will clearly point to your package on the QuantumOptics front page so that people looking for efficient steady-state solvers for master equations can easily find it.
