
# 1a7r0ch3/pcd-prox-split


## Preconditioned Proximal Splitting Algorithms

Generic C++ classes for implementing preconditioned proximal splitting algorithms.
Specialization to preconditioned generalized forward-backward or forward-Douglas–Rachford proximal splitting algorithms, on problems involving graph total variation, as explained in our articles (Raguet and Landrieu, 2015; Raguet, 2018).
Parallel implementation with the OpenMP API.
MEX API for interfacing with GNU Octave or Matlab.

### Generic classes

The class `Pcd_prox` is the most generic, providing a minimalist structure for a preconditioned proximal splitting algorithm.
The class `Pfdr` specializes it to the preconditioned generalized forward-backward or forward-Douglas–Rachford proximal splitting algorithms, introducing the preconditioner Γ, the weights W, and the Lipschitz metric L.
The class `Pfdr_d1` further specializes `Pfdr` for the graph total variation.

### Specialization `Pfdr_d1_ql1b`: quadratic functional, ℓ1 norm, bounds, and graph total variation

Minimize functionals over a graph G = (V, E), of the form

F: x ∈ ℝ^V ↦ 1/2 ║y⁽q⁾ − A x║² + ∑_{v ∈ V} λ_v |y⁽ℓ1⁾_v − x_v| + ∑_{v ∈ V} ι_{[m_v, M_v]}(x_v) + ∑_{(u,v) ∈ E} w_{(u,v)} |x_u − x_v| ,

where y⁽q⁾ ∈ ℝⁿ, A: ℝ^V → ℝⁿ is a linear operator, y⁽ℓ1⁾ ∈ ℝ^V, λ ∈ ℝ^V and w ∈ ℝ^E are regularization weights, m, M ∈ ℝ^V are parameters, and ι_{[a,b]} is the convex indicator of [a, b]: x ↦ 0 if x ∈ [a, b], +∞ otherwise.

When y⁽ℓ1⁾ is zero, the combination of ℓ1 norm and total variation is sometimes called the fused LASSO.

Currently, A must be provided as a matrix. See the documentation for special cases.

### Specialization `Pfdr_d1_lsx` : separable loss, simplex constraints, and graph total variation

Given a set of labels K, minimize functionals over a graph G = (V, E), of the form

F: x ∈ ℝ^{V×K} ↦ f(y, x) + ∑_{v ∈ V} ι_{Δ_K}(x_v) + ∑_{(u,v) ∈ E} w_{(u,v)} ∑_{k ∈ K} λ_k |x_{u,k} − x_{v,k}| ,

where y ∈ ℝ^{V×K}, f is a loss functional (see below), w ∈ ℝ^E and λ ∈ ℝ^K are regularization weights, and ι_{Δ_K} is the convex indicator of the simplex Δ_K = {x ∈ ℝ^K | ∑_k x_k = 1 and ∀ k, x_k ≥ 0}: x ↦ 0 if x ∈ Δ_K, +∞ otherwise.

The following loss functionals are available:

- Linear: f(y, x) = − ∑_{v ∈ V} ∑_{k ∈ K} x_{v,k} y_{v,k}
- Quadratic: f(y, x) = ∑_{v ∈ V} ∑_{k ∈ K} (x_{v,k} − y_{v,k})²
- Smoothed Kullback–Leibler divergence: f(y, x) = ∑_{v ∈ V} KL(α u + (1 − α) y_v, α u + (1 − α) x_v), where α ∈ ]0,1[, u ∈ Δ_K is the uniform discrete distribution over K, and KL: (p, q) ↦ ∑_{k ∈ K} p_k log(p_k/q_k).

### Directory tree

```
.
├── include/    C++ headers, with some doc
├── octave/     GNU Octave or Matlab code
│   ├── doc/    some documentation
│   └── mex/    MEX API
└── src/        C++ sources
```

### C++

The C++ classes are documented within the corresponding headers in `include/`.

### GNU Octave or Matlab

The MEX interfaces are documented within dedicated `.m` files in `mex/doc/`.
See `mex/compile_mex.m` for typical compilation commands under UNIX systems.

### References

H. Raguet and L. Landrieu, Preconditioning of a Generalized Forward-Backward Splitting and Application to Optimization on Graphs, SIAM Journal on Imaging Sciences, 2015.

H. Raguet, A Note on the Forward-Douglas–Rachford Splitting for Monotone Inclusion and Convex Optimization, Optimization Letters, 2018.
