
Improved sampling algorithm for mcsolve #2218

Merged 20 commits on Aug 25, 2023

Commits:
499d44d
improved sampling algorithm for mcsolve working in initial tests
dkweiss31 Aug 21, 2023
9101b74
avoid code duplication by utilizing existing classes, taking care not…
dkweiss31 Aug 21, 2023
f034705
black
dkweiss31 Aug 21, 2023
124eabc
small bug fixes in average_states, added tests leveraging existing mc…
dkweiss31 Aug 21, 2023
5284162
docstrings, updated user guide
dkweiss31 Aug 21, 2023
1fff804
small additions to docs
dkweiss31 Aug 21, 2023
f29d6b8
added towncrier changelog
dkweiss31 Aug 21, 2023
966f495
Merge branch 'qutip:master' into mcsolve_efficienvy_v2
dkweiss31 Aug 21, 2023
8ea0c56
better parametrization of tests
dkweiss31 Aug 21, 2023
be9a6e9
Revert "black"
dkweiss31 Aug 21, 2023
3a43fd0
eliminated unnecesary function in MCSolverImprovedSampling, more cons…
dkweiss31 Aug 21, 2023
9dc74d0
added public classes to __all__, ensured compliance with pycodestyle
dkweiss31 Aug 23, 2023
41e845e
Moved improved_sampling into options, merged MCSolverImprovedSampling…
dkweiss31 Aug 24, 2023
7e181d5
updated documentation with an example plot showing usage of improved_…
dkweiss31 Aug 24, 2023
7198a45
small docs bug fix
dkweiss31 Aug 24, 2023
b38134e
Update doc/guide/dynamics/dynamics-monte.rst
dkweiss31 Aug 25, 2023
96d7ad6
Update qutip/solver/result.py
dkweiss31 Aug 25, 2023
f18e078
using decorator @property to set resultclass based on self.options["i…
dkweiss31 Aug 25, 2023
851d7e6
Merge remote-tracking branch 'origin/mcsolve_efficienvy_v2' into mcso…
dkweiss31 Aug 25, 2023
2984036
small clarity changes to docs
dkweiss31 Aug 25, 2023
5 changes: 4 additions & 1 deletion doc/biblio.rst
@@ -73,4 +73,7 @@ Bibliography
N Khaneja et. al. *Optimal control of coupled spin dynamics: Design of NMR pulse sequences by gradient ascent algorithms.* J. Magn. Reson. **172**, 296–305 (2005). :doi:`10.1016/j.jmr.2004.11.004`

.. [Donvil22]
B. Donvil, P. Muratore-Ginanneschi, *Quantum trajectory framework for general time-local master equations*, Nat Commun **13**, 4140 (2022). :doi:`10.1038/s41467-022-31533-8`.

.. [Abd19]
M. Abdelhafez, D. I. Schuster, J. Koch, *Gradient-based optimal control of open quantum systems using quantum trajectories and automatic differentiation*, Phys. Rev. A **99**, 052327 (2019). :doi:`10.1103/PhysRevA.99.052327`.
1 change: 1 addition & 0 deletions doc/changes/2218.feature
@@ -0,0 +1 @@
Improved sampling algorithm for mcsolve
23 changes: 23 additions & 0 deletions doc/guide/dynamics/dynamics-monte.rst
@@ -148,6 +148,29 @@ Now, the Monte Carlo solver will calculate expectation values for both operators



Using the Improved Sampling Algorithm
-------------------------------------

Oftentimes, quantum jumps are rare. This is especially true when simulating gates
for quantum information purposes, where typical gate times are orders of magnitude shorter than
typical decoherence timescales. In this case, the standard Monte Carlo sampling algorithm
repeatedly samples the no-jump trajectory. We can thus reduce the number of required runs
by sampling the no-jump trajectory only once. We then extract the no-jump probability :math:`p`,
and for all subsequent runs we only sample random numbers :math:`r_1` with :math:`r_1>p`, thus ensuring
that a jump will occur. When computing expectation values, we weight the no-jump
trajectory by :math:`p` and the jump trajectories by :math:`1-p`. This algorithm, described
in [Abd19]_, is enabled by setting the flag ``improved_sampling=True`` in the call to
``mcsolve``:

.. plot::
:context: close-figs

data = mcsolve(H, psi0, times, [np.sqrt(0.1) * a], e_ops=[a.dag() * a, sm.dag() * sm], improved_sampling=True)

where in this case the first run samples the no-jump trajectory, and the remaining 499 trajectories are all
guaranteed to include (at least) one jump.
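The rescaled random draw and the trajectory weighting described in the guide text can be sketched in plain NumPy. This is a standalone illustration, not qutip output; the probability and expectation values below are placeholders:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical no-jump survival probability extracted from the first run.
p_no_jump = 0.9

# Standard sampling would draw target norms uniformly from [0, 1), landing
# below p_no_jump ~90% of the time and re-running the same no-jump trajectory.
# Improved sampling draws only from [p_no_jump, 1), so each of the remaining
# 499 trajectories is guaranteed at least one jump.
r1 = p_no_jump + (1 - p_no_jump) * rng.random(size=499)

# Expectation values are then reweighted: the single no-jump trajectory
# carries weight p, and the jump trajectories share weight (1 - p).
expect_no_jump = 1.0                # placeholder <O> on the no-jump run
expect_jump = rng.random(size=499)  # placeholder <O> on the jump runs
expect_total = (
    p_no_jump * expect_no_jump + (1 - p_no_jump) * expect_jump.mean()
)
```

Because the no-jump trajectory is deterministic, one run of it plus the reweighting above reproduces the same estimator as many redundant standard-sampling runs.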


.. _monte-reuse:

Reusing Hamiltonian Data
165 changes: 116 additions & 49 deletions qutip/solver/mcsolve.py
@@ -1,17 +1,30 @@
__all__ = ["mcsolve", "MCSolver"]

import numpy as np
from ..core import QobjEvo, spre, spost, Qobj, unstack_columns
from .multitraj import MultiTrajSolver, MultiTrajSolverImprovedSampling
from .solver_base import Solver, Integrator
from .result import McResult, McTrajectoryResult, McResultImprovedSampling
from .mesolve import mesolve, MESolver
import qutip.core.data as _data
from time import time


def mcsolve(
H,
state,
tlist,
c_ops=(),
e_ops=None,
ntraj=500,
*,
args=None,
options=None,
seeds=None,
target_tol=None,
timeout=None,
improved_sampling=False,
Review comment (Member):

I feel this should be in options. Options include modifications to the algorithm that do not change the physics.

Reply (Contributor Author):

Right, so I guess the question is: do we make a new function mcissolve or add this as a flag to options. Maybe I'd be more inclined toward a new function, but I don't have a strong preference.
):
r"""
Monte Carlo evolution of a state vector :math:`|\psi \rangle` for a
given Hamiltonian and sets of collapse operators. Options for the
@@ -116,6 +129,10 @@ def mcsolve(H, state, tlist, c_ops=(), e_ops=None, ntraj=500, *,
Maximum time for the evolution in second. When reached, no more
trajectories will be computed.

improved_sampling: bool
Whether to use the improved sampling algorithm from Abdelhafez et al.,
PRA (2019).

Returns
-------
results : :class:`qutip.solver.McResult`
@@ -136,29 +153,36 @@ def mcsolve(H, state, tlist, c_ops=(), e_ops=None, ntraj=500, *,
if options is None:
options = {}
options = {
key: options[key] for key in options if key in MESolver.solver_options
}
return mesolve(H, state, tlist, e_ops=e_ops, args=args, options=options)

if isinstance(ntraj, (list, tuple)):
raise TypeError(
"ntraj must be an integer. "
"A list of numbers is not longer supported."
"ntraj must be an integer. " "A list of numbers is not longer supported."
)

if not improved_sampling:
mc = MCSolver(H, c_ops, options=options)
else:
mc = MCSolverImprovedSampling(H, c_ops, options=options)

result = mc.run(
state,
tlist=tlist,
ntraj=ntraj,
e_ops=e_ops,
seed=seeds,
target_tol=target_tol,
timeout=timeout,
)
return result


class MCIntegrator:
"""
Integrator like object for mcsolve trajectory.
"""

name = "mcsolve"

def __init__(self, integrator, c_ops, n_ops, options=None):
@@ -171,7 +195,7 @@ def __init__(self, integrator, c_ops, n_ops, options=None):
self._is_set = False
self.issuper = c_ops[0].issuper

def set_state(self, t, state0, generator, no_jump=False, jump_prob_floor=0.0):
"""
Set the state of the ODE solver.

@@ -185,10 +209,23 @@ def set_state(self, t, state0, generator):

generator : numpy.random.generator
Random number generator.

no_jump: bool
Whether to sample the no-jump trajectory. If so, the "random number"
(target norm) is set to zero.

jump_prob_floor: float
If no_jump == False, this is set to the no-jump probability. This
ensures that we sample a trajectory with jumps.
"""
self.collapses = []
self._generator = generator
if no_jump:
self.target_norm = 0.0
else:
self.target_norm = (
self._generator.random() * (1 - jump_prob_floor) + jump_prob_floor
)
self._integrator.set_state(t, state0)
self._is_set = True
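The target-norm draw in ``set_state`` can be condensed into a small standalone helper; the following is a sketch (the function name is hypothetical, not part of the PR):

```python
import numpy as np

def draw_target_norm(rng, no_jump=False, jump_prob_floor=0.0):
    """Mirror of the target-norm draw in MCIntegrator.set_state (sketch).

    For the dedicated no-jump run the target norm is pinned to 0.0, so the
    decaying state norm never crosses it and no collapse is triggered. For
    all other runs the uniform draw is rescaled from [0, 1) into
    [jump_prob_floor, 1), which forces at least one collapse.
    """
    if no_jump:
        return 0.0
    return rng.random() * (1 - jump_prob_floor) + jump_prob_floor

rng = np.random.default_rng(7)
pinned = draw_target_norm(rng, no_jump=True)
samples = [draw_target_norm(rng, jump_prob_floor=0.8) for _ in range(1000)]
```

With ``jump_prob_floor=0.8``, every sampled target norm lands in ``[0.8, 1.0)``, so the corresponding trajectory must collapse before its norm decays below 0.8.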

@@ -202,11 +239,10 @@ def integrate(self, t, copy=False):
t_step, state = self._integrator.mcstep(t, copy=False)
norm = self._prob_func(state)
if norm <= self.target_norm:
t_col, state = self._find_collapse_time(norm_old, norm, t_old, t_step)
self._do_collapse(t_col, state)
t_old, y_old = self._integrator.get_state(copy=False)
norm_old = 1.0
else:
t_old, y_old = t_step, state
norm_old = norm
@@ -223,7 +259,7 @@ def reset(self, hard=False):
def _prob_func(self, state):
if self.issuper:
return _data.trace_oper_ket(state).real
return _data.norm.l2(state) ** 2

def _norm_func(self, state):
if self.issuper:
@@ -233,28 +269,27 @@ def _norm_func(self, state):
def _find_collapse_time(self, norm_old, norm, t_prev, t_final):
"""Find the time of the collapse and state just before it."""
tries = 0
while tries < self.options["norm_steps"]:
tries += 1
if (t_final - t_prev) < self.options["norm_t_tol"]:
t_guess = t_final
_, state = self._integrator.get_state()
break
t_guess = t_prev + (
(t_final - t_prev)
* np.log(norm_old / self.target_norm)
/ np.log(norm_old / norm)
)
if (t_guess - t_prev) < self.options["norm_t_tol"]:
t_guess = t_prev + self.options["norm_t_tol"]
_, state = self._integrator.mcstep(t_guess, copy=False)
norm2_guess = self._prob_func(state)
if (
np.abs(self.target_norm - norm2_guess)
< self.options["norm_tol"] * self.target_norm
):
break
elif norm2_guess < self.target_norm:
# t_guess is still > t_jump
t_final = t_guess
norm = norm2_guess
@@ -263,11 +298,12 @@ def _find_collapse_time(self, norm_old, norm, t_prev, t_final):
t_prev = t_guess
norm_old = norm2_guess

if tries >= self.options["norm_steps"]:
raise RuntimeError(
"Could not find the collapse time within desired tolerance. "
"Increase accuracy of the ODE solver or lower the tolerance "
"with the options 'norm_steps', 'norm_tol', 'norm_t_tol'.")
"with the options 'norm_steps', 'norm_tol', 'norm_t_tol'."
)

return t_guess, state
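``_find_collapse_time`` brackets the jump time by logarithmic interpolation of the norm. For a purely exponential norm decay the first guess is already exact, which is why the method converges quickly in practice. A standalone sketch (the decay rate is a made-up example):

```python
import numpy as np

gamma = 2.0  # hypothetical decay rate for this illustration

def sq_norm(t):
    # Squared norm of the unnormalized state, decaying exponentially.
    return np.exp(-gamma * t)

t_prev, t_final = 0.0, 1.0
norm_old, norm_new = sq_norm(t_prev), sq_norm(t_final)
target_norm = 0.3  # the drawn target norm the trajectory must cross

# Logarithmic interpolation as in _find_collapse_time: assume the norm
# decays geometrically between the bracketing points and solve for where
# it equals target_norm.
t_guess = t_prev + (
    (t_final - t_prev)
    * np.log(norm_old / target_norm)
    / np.log(norm_old / norm_new)
)

# For exponential decay the guess coincides with the exact crossing time.
t_exact = -np.log(target_norm) / gamma
```

For non-exponential decay the guess only brackets the crossing, and the solver iterates (up to ``norm_steps`` times, to within ``norm_tol``) exactly as in the loop above.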

@@ -287,17 +323,19 @@ def _do_collapse(self, collapse_time, state):
for i, n_op in enumerate(self._n_ops):
probs[i] = n_op.expect_data(collapse_time, state).real
probs = np.cumsum(probs)
which = np.searchsorted(probs, probs[-1] * self._generator.random())

state_new = self._c_ops[which].matmul_data(collapse_time, state)
new_norm = self._norm_func(state_new)
if new_norm < self.options["mc_corr_eps"]:
# This happens when the collapse is caused by numerical error
state_new = _data.mul(state, 1 / self._norm_func(state))
else:
state_new = _data.mul(state_new, 1 / new_norm)
self.collapses.append((collapse_time, which))
# this does not need to be modified for improved sampling:
# as noted in Abdelhafez PRA (2019),
# after a jump we reset to the full range [0, 1)
self.target_norm = self._generator.random()
self._integrator.set_state(collapse_time, state_new)
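The ``which`` selection in ``_do_collapse`` draws a collapse channel with probability proportional to its instantaneous jump rate, using cumulative sums plus ``searchsorted``. A standalone sketch with made-up rates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical instantaneous jump rates <psi|C_i^dag C_i|psi> for 3 channels.
probs = np.array([0.5, 0.3, 0.2])

def pick_channel(probs, rng):
    """Weighted channel choice as in _do_collapse: cumulative sums plus
    searchsorted, equivalent to sampling i with probability
    probs[i] / probs.sum()."""
    cum = np.cumsum(probs)
    return int(np.searchsorted(cum, cum[-1] * rng.random()))

# Empirical check: channel frequencies approach the normalized rates.
counts = np.bincount(
    [pick_channel(probs, rng) for _ in range(20000)], minlength=3
)
freqs = counts / counts.sum()
```

Note that after any jump the target norm is redrawn from the full range [0, 1), so this selection step is identical with and without improved sampling.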

@@ -392,21 +430,22 @@ def _restore_state(self, data, *, copy=True):
"""
Restore the Qobj state from its data.
"""
if self._state_metadata["dims"] == self.rhs.dims[1]:
state = Qobj(unstack_columns(data), **self._state_metadata, copy=False)
else:
state = Qobj(data, **self._state_metadata, copy=copy)

return state

def _initialize_stats(self):
stats = super()._initialize_stats()
stats.update(
{
"method": self.options["method"],
"solver": "Master Equation Evolution",
"num_collapse": self._num_collapse,
}
)
return stats

def _argument(self, args):
@@ -417,15 +456,19 @@ def _argument(self, args):
for n_op in self._n_ops:
n_op.arguments(args)

def _run_one_traj(
self, seed, state, tlist, e_ops, no_jump=False, jump_prob_floor=0.0
):
"""
Run one trajectory and return the result.
"""
# The integrator is reused, but non-reentrant. It is fine for
# multiprocessing, but will fail with multithreading.
# If a thread-based parallel map is created, each trajectory should
# use a copy of the integrator.
seed, result = super()._run_one_traj(
seed, state, tlist, e_ops, no_jump=no_jump, jump_prob_floor=jump_prob_floor
)
result.collapse = self._integrator.collapses
return seed, result

@@ -518,3 +561,27 @@ def avail_integrators(cls):
**Solver.avail_integrators(),
**cls._avail_integrators,
}


class MCSolverImprovedSampling(MultiTrajSolverImprovedSampling, MCSolver):
r"""
Monte Carlo Solver of a state vector :math:`|\psi \rangle` for a
given Hamiltonian and sets of collapse operators using the improved
sampling algorithm. See MCSolver for further details and documentation.
"""
name = "mcsolve improved sampling"
resultclass = McResultImprovedSampling
trajectory_resultclass = McTrajectoryResult
mc_integrator_class = MCIntegrator

def __init__(self, H, c_ops, *, options=None):
MCSolver.__init__(self, H, c_ops, options=options)

def _run_one_traj(
self, seed, state, tlist, e_ops, no_jump=False, jump_prob_floor=0.0
):
seed, result = super()._run_one_traj(
seed, state, tlist, e_ops, no_jump=no_jump, jump_prob_floor=jump_prob_floor
)
result.collapse = self._integrator.collapses
return seed, result