# SparseMAP: Differentiable Sparse Structure Inference

![SparseMAP cartoon](sparsemap.png)

SparseMAP is a new method for sparse structured inference, able to automatically select only a few global structures: it is situated between MAP inference, which picks a single structure, and marginal inference, which assigns probability mass to all structures, including implausible ones.

SparseMAP is differentiable and can work with any structure for which a MAP oracle is available.
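For intuition: in the unstructured case, where each "structure" is a single label, SparseMAP reduces to the sparsemax transformation, a Euclidean projection onto the probability simplex that zeroes out low-scoring options. A minimal NumPy sketch (for illustration only, not part of this codebase):

```python
import numpy as np

def sparsemax(z):
    """Euclidean projection of scores z onto the probability simplex."""
    z_sorted = np.sort(z)[::-1]          # scores in decreasing order
    k = np.arange(1, len(z) + 1)
    cumsum = np.cumsum(z_sorted)
    support = z_sorted * k > cumsum - 1  # which entries stay nonzero
    k_max = k[support][-1]
    tau = (cumsum[support][-1] - 1) / k_max
    return np.maximum(z - tau, 0.0)

# Unlike softmax, which gives every option positive probability,
# sparsemax assigns exactly zero to implausible ones.
p = sparsemax(np.array([2.0, 1.0, -1.0]))  # -> [1., 0., 0.]
```

SparseMAP generalizes this sparsity-inducing behavior to exponentially large structured output spaces, using only a MAP oracle.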

More info in our paper,

*SparseMAP: Differentiable Sparse Structured Inference.* Vlad Niculae, André F.T. Martins, Mathieu Blondel, Claire Cardie. In: Proc. of ICML, 2018.

SparseMAP may be used to dynamically infer the computation graph structure, marginalizing over a sparse distribution over all possible structures. Navigate to the cpp folder for an implementation, and see our paper,

*Towards Dynamic Computation Graphs via Sparse Latent Structure.* Vlad Niculae, André F.T. Martins, Claire Cardie. In: Proc. of EMNLP, 2018.

## Current state of the codebase

We are gradually providing useful implementations. At the moment, the codebase provides a generic PyTorch layer (for PyTorch 0.2), as well as particular instantiations for sequence, matching, and tree layers.

DyNet custom layers, as well as the SparseMAP loss, are on the way.

## Python Setup

Requirements: `numpy`, `scipy`, `Cython`, `pytorch == 0.2`, and `ad3 >= 2.2`.

1. Set the `AD3_DIR` environment variable to point to the AD3 source directory.
2. Inside the `python` dir, run `python setup.py build_ext --inplace`.
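The two steps above can be run as follows (the AD3 path below is an example; substitute your own checkout location):

```shell
# Point AD3_DIR at your AD3 source checkout (example path)
export AD3_DIR="$HOME/code/AD3"

# Build the Cython extensions in place
cd python
python setup.py build_ext --inplace
```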

### Notes on testing

The implemented layers pass numerical tests. However, the PyTorch `gradcheck` (as of version 0.2) has a very strict "reentrant" test, which we fail due to tiny numerical differences. To reliably check gradients, please comment out the `if not reentrant: ...` part of PyTorch's `gradcheck.py`.

## DyNet (C++) setup

See the instructions in the cpp folder.