qmm.rst
API references (qmm module)

All functionality is provided by the single qmm module, described below.

Optimization algorithms

Three algorithms are implemented:

  1. mmcg, which uses the Majorize-Minimize Conjugate Gradient (MM-CG) algorithm,
  2. mmmg, which uses the Majorize-Minimize Memory Gradient (3MG) algorithm, and
  3. lcg, which uses the Linear Conjugate Gradient (CG), with exact optimal step and conjugacy parameters, for quadratic objectives (QuadObjective) only.

The 3MG algorithm is usually faster but uses more memory. MM-CG uses less memory and can be faster in some cases.
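To illustrate what lcg does conceptually, here is a textbook linear conjugate gradient for a quadratic objective, with the exact optimal step and Fletcher-Reeves conjugacy. This is a minimal NumPy sketch of the classical algorithm, not the library's implementation:

```python
import numpy as np

def linear_cg(hess, b, x0, max_iter=50, tol=1e-10):
    """Textbook linear CG for minimizing 1/2 x^T H x - b^T x (H symmetric positive definite)."""
    x = x0.copy()
    r = b - hess @ x                      # residual = minus the gradient
    d = r.copy()                          # first search direction
    for _ in range(max_iter):
        hd = hess @ d
        step = (r @ r) / (d @ hd)         # exact optimal step for a quadratic
        x = x + step * d
        r_new = r - step * hd
        if np.linalg.norm(r_new) < tol:
            break
        beta = (r_new @ r_new) / (r @ r)  # Fletcher-Reeves conjugacy parameter
        d = r_new + beta * d
        r = r_new
    return x

# Small synthetic problem: SPD Hessian and random right-hand side.
rng = np.random.default_rng(0)
a = rng.standard_normal((20, 20))
hess = a.T @ a + 20 * np.eye(20)
b = rng.standard_normal(20)
x = linear_cg(hess, b, np.zeros(20))
```

For a strictly convex quadratic, CG converges in at most as many iterations as the problem dimension; the MM variants above generalize this idea to non-quadratic objectives via majorizing quadratics.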

Majorize-Minimize Conjugate Gradient

mmcg

Majorize-Minimize Memory Gradient

mmmg

Linear Conjugate Gradient

lcg

Optimization results

The outputs are instances of OptimizeResult, which mimics scipy's OptimizeResult. They behave like Python dictionaries and are reimplemented here to avoid a dependency on scipy.
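The scipy-style pattern being mimicked is a dictionary whose keys are also readable as attributes. A minimal sketch of that pattern (illustrative only, not qmm's actual class):

```python
class OptimizeResult(dict):
    """Dictionary whose keys are also accessible as attributes (scipy-style)."""

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails: fall back to dict keys.
        try:
            return self[name]
        except KeyError as exc:
            raise AttributeError(name) from exc

    __setattr__ = dict.__setitem__
    __delattr__ = dict.__delitem__

# Both access styles refer to the same stored value.
res = OptimizeResult(x=[1.0, 2.0], success=True, nit=12)
```

With this pattern, `res.x` and `res["x"]` are interchangeable, which is why the results integrate smoothly with code written for scipy.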

OptimizeResult

Objective classes

Objective functions are defined from the abstract class BaseObjective, which has three abstract methods that must be implemented by subclasses. Users who want to implement their own objective are encouraged to subclass BaseObjective.

Four generic concrete subclasses of BaseObjective are provided. The Objective class is the most general and the preferred choice; QuadObjective is a specialized subclass that allows simplifications and slightly faster computation. Vmax and Vmin implement bound penalties.
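The subclassing pattern looks roughly like the following sketch. The method names here (value, gradient) are hypothetical placeholders chosen for illustration; the actual abstract methods required by BaseObjective are listed in its reference entry below:

```python
from abc import ABC, abstractmethod
import numpy as np

class MyBaseObjective(ABC):
    """Stand-in for qmm's BaseObjective, with placeholder abstract methods."""

    @abstractmethod
    def value(self, point):
        """Return the objective value at `point`."""

    @abstractmethod
    def gradient(self, point):
        """Return the gradient at `point`."""

class SquaredNorm(MyBaseObjective):
    """Toy custom objective: 1/2 * ||x||^2."""

    def value(self, point):
        return 0.5 * float(point @ point)

    def gradient(self, point):
        return point

objv = SquaredNorm()
x = np.array([3.0, 4.0])
```

Until every abstract method is implemented, Python refuses to instantiate the subclass, which is the safeguard the abstract base class provides.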

Note

The property lastgv is used by the algorithms to compute the objective function value at each iteration with low overhead when the flag calc_fun is set to True (False by default). It is optional: the algorithms do not require it.

BaseObjective

Main objective

Objective

Note

The Objective class implements the __call__ interface, allowing instances to behave like callables (functions) that return the objective value:

import numpy as np
import qmm

identity = lambda x: x

objv = qmm.Objective(identity, identity, qmm.Square())
x = np.random.standard_normal((100,))
objv(x) == objv.value(x)  # True

Quadratic objective

This class implements properties and methods specific to quadratic objective functions.

QuadObjective

Note

The operator argument of Objective and QuadObjective must be a callable that accepts an array as input. The operator may return an array, but it may also return a list of arrays (for data fusion, for instance). However, the optimization algorithms internally require everything to be an array. When a list of arrays is returned, the arrays are handled by a Stacked class and internally vectorized, so the data are memory-copied at each iteration.

If the operator returns a list of arrays, the adjoint must also accept a list of arrays. Again, everything is vectorized, and Objective rebuilds the list of arrays internally.

QuadObjective handles such lists of arrays more efficiently, since the data $\omega$ are not stored internally by the class; only $\mu V^T B \omega$, an array shaped like x, is stored.

If given, the hessp callable argument of QuadObjective must accept an array and return an array.
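The vectorize-and-rebuild round trip described above can be sketched with plain NumPy. This is only an illustration of the idea (and of why it implies a memory copy per iteration), not qmm's Stacked class:

```python
import numpy as np

def vectorize(arrays):
    """Concatenate a list of arrays into one 1-D vector (this copies memory)."""
    return np.concatenate([arr.ravel() for arr in arrays])

def rebuild(vector, shapes):
    """Inverse of vectorize: split the flat vector and restore the shapes."""
    out, start = [], 0
    for shape in shapes:
        size = int(np.prod(shape))
        out.append(vector[start:start + size].reshape(shape))
        start += size
    return out

# A "data fusion" style list: two datasets of different shapes.
data = [np.arange(6.0).reshape(2, 3), np.arange(4.0)]
flat = vectorize(data)
back = rebuild(flat, [arr.shape for arr in data])
```

Because `np.concatenate` cannot share memory across separate buffers, each vectorize call allocates a fresh copy, which is the per-iteration cost the note mentions.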

Specific objective classes

Vmin

Vmax

Sum of objectives

MixedObjective is a convenient (but not required) list-like class that represents a sum of BaseObjective instances. BaseObjective and MixedObjective support the "+" operator: adding two BaseObjective instances returns a new MixedObjective, while adding a BaseObjective to a MixedObjective updates that instance. Since MixedObjective behaves like a list, it can be passed directly to the optimization algorithms.

likelihood = QuadObjective(...)
prior1 = Objective(...)
prior2 = Objective(...)

# Equivalent to objective = MixedObjective([likelihood, prior1])
objective = likelihood + prior1

# Equivalent to objective.append(prior2)
objective = objective + prior2

# Equivalent to res = mmmg([likelihood, prior1, prior2], ...)
res = mmmg(objective, ...)

MixedObjective

Losses classes

The class Loss is an abstract base class and serves as the parent class for all losses. At this time, the provided concrete loss functions are Square, Huber, Hyperbolic, HebertLeahy, GemanMcClure, and TruncSquareApprox.

Note

The Loss class implements the __call__ interface, allowing instances to behave like callables (functions) that return the loss value:

import numpy as np
import matplotlib.pyplot as plt
import qmm

u = np.linspace(-5, 5, 1000)
pot = qmm.Huber(1)
plt.plot(u, pot(u))
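The Huber loss itself is a standard function: quadratic below a threshold delta, linear (with matched slope and value) above it. A reference NumPy version of the textbook definition, for comparison with the plot above (not qmm's Huber class):

```python
import numpy as np

def huber(u, delta=1.0):
    """Textbook Huber loss: 1/2 u^2 for |u| <= delta, delta*(|u| - delta/2) beyond."""
    u = np.asarray(u, dtype=float)
    quadratic = 0.5 * u**2                      # small residuals: least-squares behavior
    linear = delta * (np.abs(u) - 0.5 * delta)  # large residuals: linear growth, robust to outliers
    return np.where(np.abs(u) <= delta, quadratic, linear)
```

The two branches meet with equal value and slope at |u| = delta, which is what makes the loss continuously differentiable and therefore usable with gradient-based MM algorithms.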

Loss

Square

Square

Huber

Huber

Hyperbolic or Pseudo-Huber

Hyperbolic

Hebert & Leahy

HebertLeahy

Geman & Mc Clure

GemanMcClure

Truncated Square approximation

TruncSquareApprox