[MRG] ENH: Optional positivity constraints on the dictionary and sparse code (#6374)

* ENH: Add positivity option for code and dictionary

Provides an option for dictionary learning to constrain the dictionary and
the sparse code to be positive. This is useful in applications where the data
is known to be positive (e.g. images), but where the sparsity of dictionary
learning is better suited to factorizing the data than other positively
constrained factorization techniques such as NMF, which may not yield
similarly sparse results.
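
A minimal sketch of the new options (the hyperparameters mirror the example
script changed below; the nonnegative data here is synthetic, for
illustration only):

    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning

    rng = np.random.RandomState(0)
    X = np.abs(rng.randn(100, 64))  # stand-in for nonnegative data, e.g. image patches

    dico = MiniBatchDictionaryLearning(n_components=15, alpha=0.1, n_iter=50,
                                       batch_size=3, random_state=rng,
                                       positive_dict=True,   # dictionary atoms >= 0
                                       positive_code=True)   # sparse codes >= 0
    dico.fit(X)
    assert (dico.components_ >= 0).all()  # learned dictionary has no negative entries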

* TST: Test positivity with code and dictionary

Ensure that when a positivity constraint is applied, the dictionary and/or
the sparse code contain no negative values in the result, depending on
whether the dictionary, the code, or both are constrained.
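
A sketch of the kind of check involved, using the public dict_learning
function rather than the literal test code added here (method='cd' is chosen
because coordinate descent supports the positivity options):

    import numpy as np
    from sklearn.decomposition import dict_learning

    rng = np.random.RandomState(0)
    X = rng.randn(10, 8)

    # Toggle each constraint independently and check the corresponding factor.
    for positive_dict in (False, True):
        for positive_code in (False, True):
            code, dictionary, errors = dict_learning(
                X, n_components=5, alpha=1, method='cd', random_state=0,
                positive_dict=positive_dict, positive_code=positive_code)
            if positive_dict:
                assert (dictionary >= 0).all()
            if positive_code:
                assert (code >= 0).all()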

* DOC: Positivity constraints dictionary learning

Shows the various positivity constraints on dictionary learning and what
their results look like, using a red-to-blue color map. These figures are
included in the examples and also in the docs under dictionary learning.
All of them use the Olivetti faces as the training set.
jakirkham authored and ogrisel committed Jun 21, 2018
1 parent 5718466 commit 6ce497c
Showing 4 changed files with 284 additions and 28 deletions.
26 changes: 26 additions & 0 deletions doc/modules/decomposition.rst
@@ -451,6 +451,32 @@
 After using such a procedure to fit the dictionary, the transform is simply a
 sparse coding step that shares the same implementation with all dictionary
 learning objects (see :ref:`SparseCoder`).
 
+It is also possible to constrain the dictionary and/or code to be positive to
+match constraints that may be present in the data. Below are the faces with
+different positivity constraints applied. Red indicates negative values, blue
+indicates positive values, and white represents zeros.
+
+
+.. |dict_img_pos1| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_011.png
+   :target: ../auto_examples/decomposition/plot_faces_decomposition.html
+   :scale: 60%
+
+.. |dict_img_pos2| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_012.png
+   :target: ../auto_examples/decomposition/plot_faces_decomposition.html
+   :scale: 60%
+
+.. |dict_img_pos3| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_013.png
+   :target: ../auto_examples/decomposition/plot_faces_decomposition.html
+   :scale: 60%
+
+.. |dict_img_pos4| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_014.png
+   :target: ../auto_examples/decomposition/plot_faces_decomposition.html
+   :scale: 60%
+
+.. centered:: |dict_img_pos1| |dict_img_pos2|
+.. centered:: |dict_img_pos3| |dict_img_pos4|
+
+
 The following image shows what a dictionary learned from 4x4 pixel image
 patches, extracted from part of the image of a raccoon face, looks like.

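For reference, a minimal sketch of the patch-based pipeline that paragraph
describes (the patch count and hyperparameters here are illustrative, not the
ones used in the referenced example; in newer SciPy the face image lives in
scipy.datasets):

    from scipy.misc import face  # raccoon face; scipy.datasets.face in newer SciPy
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.feature_extraction.image import extract_patches_2d

    img = face(gray=True) / 255.0
    patches = extract_patches_2d(img[:256, :256], (4, 4),
                                 max_patches=2000, random_state=0)
    data = patches.reshape(len(patches), -1)
    data -= data.mean(axis=0)  # center each patch dimension

    dico = MiniBatchDictionaryLearning(n_components=100, alpha=1, n_iter=100,
                                       random_state=0).fit(data)
    print(dico.components_.shape)  # (100, 16): 100 atoms of 4x4 pixels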
57 changes: 55 additions & 2 deletions examples/decomposition/plot_faces_decomposition.py
@@ -48,13 +48,13 @@
 print("Dataset consists of %d faces" % n_samples)
 
 
-def plot_gallery(title, images, n_col=n_col, n_row=n_row):
+def plot_gallery(title, images, n_col=n_col, n_row=n_row, cmap=plt.cm.gray):
     plt.figure(figsize=(2. * n_col, 2.26 * n_row))
     plt.suptitle(title, size=16)
     for i, comp in enumerate(images):
         plt.subplot(n_row, n_col, i + 1)
         vmax = max(comp.max(), -comp.min())
-        plt.imshow(comp.reshape(image_shape), cmap=plt.cm.gray,
+        plt.imshow(comp.reshape(image_shape), cmap=cmap,
                    interpolation='nearest',
                    vmin=-vmax, vmax=vmax)
     plt.xticks(())
@@ -137,3 +137,56 @@ def plot_gallery(title, images, n_col=n_col, n_row=n_row):
                  components_[:n_components])
 
 plt.show()
+
+# #############################################################################
+# Various positivity constraints applied to dictionary learning.
+estimators = [
+    ('Dictionary learning',
+        decomposition.MiniBatchDictionaryLearning(n_components=15, alpha=0.1,
+                                                  n_iter=50, batch_size=3,
+                                                  random_state=rng),
+     True),
+    ('Dictionary learning - positive dictionary',
+        decomposition.MiniBatchDictionaryLearning(n_components=15, alpha=0.1,
+                                                  n_iter=50, batch_size=3,
+                                                  random_state=rng,
+                                                  positive_dict=True),
+     True),
+    ('Dictionary learning - positive code',
+        decomposition.MiniBatchDictionaryLearning(n_components=15, alpha=0.1,
+                                                  n_iter=50, batch_size=3,
+                                                  random_state=rng,
+                                                  positive_code=True),
+     True),
+    ('Dictionary learning - positive dictionary & code',
+        decomposition.MiniBatchDictionaryLearning(n_components=15, alpha=0.1,
+                                                  n_iter=50, batch_size=3,
+                                                  random_state=rng,
+                                                  positive_dict=True,
+                                                  positive_code=True),
+     True),
+]
+
+
+# #############################################################################
+# Plot a sample of the input data
+
+plot_gallery("First centered Olivetti faces", faces_centered[:n_components],
+             cmap=plt.cm.RdBu)
+
+# #############################################################################
+# Do the estimation and plot it
+
+for name, estimator, center in estimators:
+    print("Extracting the top %d %s..." % (n_components, name))
+    t0 = time()
+    data = faces
+    if center:
+        data = faces_centered
+    estimator.fit(data)
+    train_time = (time() - t0)
+    print("done in %0.3fs" % train_time)
+    components_ = estimator.components_
+    plot_gallery(name, components_[:n_components], cmap=plt.cm.RdBu)
+
+plt.show()
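
As a quick sanity check on the loop above (a sketch, not part of the
committed example): the constraint flags are stored on each estimator as
init parameters, so positivity of the learned dictionaries can be verified
directly after fitting:

    # Positively constrained dictionaries contain no negative entries.
    for name, estimator, center in estimators:
        if getattr(estimator, 'positive_dict', False):
            assert (estimator.components_ >= 0).all(), name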
