
Morgan A. Schmitz, Matthieu Heitz, Nicolas Bonneel, Fred Maurice Ngolè Mboula, David Coeurjolly, Marco Cuturi, Gabriel Peyré, and Jean-Luc Starck. "Wasserstein dictionary learning: Optimal transport-based unsupervised nonlinear dictionary learning." SIAM Journal on Imaging Sciences, 2018.

# Wasserstein Dictionary Learning [Schmitz et al. 2018]

This repository contains the code for the following publication. Please credit this reference if you use it.

    @article{schmitz_wasserstein_2018,
      title = {Wasserstein {Dictionary} {Learning}: {Optimal} {Transport}-based unsupervised non-linear dictionary learning},
      shorttitle = {Wasserstein {Dictionary} {Learning}},
      url = {https://hal.archives-ouvertes.fr/hal-01717943},
      journal = {SIAM Journal on Imaging Sciences},
      author = {Schmitz, Morgan A and Heitz, Matthieu and Bonneel, Nicolas and Ngolè Mboula, Fred Maurice and Coeurjolly, David and Cuturi, Marco and Peyré, Gabriel and Starck, Jean-Luc},
      year = {2018},
      keywords = {Dictionary Learning, Optimal Transport, Wasserstein barycenter},
    }


The full text is available on HAL and arXiv.

### Configure, build and run

There is a CMakeLists.txt for the project, so you can build it in a directory outside the source:

    $ mkdir build
    $ cd build
    $ cmake ..
    $ make

##### Warm restart

The idea is that instead of a single L-BFGS run of 500 iterations, you restart a fresh L-BFGS run every 10 iterations and initialize the scaling vectors with those obtained at the end of the previous run. As explained in the paper, this technique accumulates Sinkhorn iterations across L-BFGS runs, so it requires fewer Sinkhorn iterations per run for equivalent or better results, which leads to significant speed-ups. Be aware that accumulating too many Sinkhorn iterations can lead to numerical instabilities. If that happens, you can use the log-domain stabilization, which is slower, but the slowdown is compensated by the speed-up of the warm restart. For more details, please refer to our paper. The value of 10 optimization iterations per L-BFGS run is arbitrary and can be changed in the code (in regress_both() of inverseWasserstein.h), but it has given good results in our experiments.

This program is free software: you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details.

You should have received a copy of the GNU Lesser General Public License along with this program. If not, see http://www.gnu.org/licenses/.

### Contact

matthieu.heitz@univ-lyon1.fr

