Sparse RNNs -- Learning Connections and Hidden Sizes (TensorFlow)

To reproduce the results, please use the exact TensorFlow versions specified.

About

This is a TensorFlow implementation for training sparse LSTMs. The related paper was published at ICLR 2018: Learning Intrinsic Sparse Structures within Long Short-term Memory. The code supports both structurally sparse LSTMs and non-structurally sparse LSTMs. The work on sparse CNNs is available here. The poster is here.

We use L1-norm regularization to obtain non-structurally sparse LSTMs. Its effect is similar to connection pruning: it can significantly reduce the number of parameters in LSTMs, but the irregular pattern of the remaining non-zero weights may limit computational efficiency.
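As a minimal sketch of how such a penalty can be attached to a graph, assuming TensorFlow 1.x (the model, variable names, and `l1_lambda` below are illustrative, not the repository's actual code):

```python
import tensorflow as tf  # assumes TensorFlow 1.x

# Hypothetical 2-layer stacked LSTM; `task_loss` is a stand-in objective.
cell = tf.nn.rnn_cell.MultiRNNCell(
    [tf.nn.rnn_cell.LSTMCell(200) for _ in range(2)])
inputs = tf.placeholder(tf.float32, [20, 35, 200])  # batch x time x features
outputs, _ = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
task_loss = tf.reduce_mean(tf.square(outputs))  # placeholder loss

# L1 penalty over all LSTM kernels pushes individual weights toward zero.
l1_lambda = 1e-4  # hypothetical strength, not the paper's value
kernels = [v for v in tf.trainable_variables() if "kernel" in v.name]
l1_penalty = tf.add_n([tf.reduce_sum(tf.abs(w)) for w in kernels])
total_loss = task_loss + l1_lambda * l1_penalty
```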

We use group Lasso regularization to obtain structurally sparse LSTMs. It both reduces model parameters and yields regular non-zero weight patterns that are friendly to fast computation.
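A minimal sketch of a group Lasso term, again assuming TensorFlow 1.x and treating each column of a weight matrix as one group (so a zeroed column removes the corresponding output unit); names and the strength are illustrative:

```python
import tensorflow as tf  # assumes TensorFlow 1.x

def group_lasso(weight, axis, eps=1e-8):
    """Sum of L2 norms of the groups along `axis`; eps keeps the
    gradient finite when a whole group reaches zero."""
    return tf.reduce_sum(
        tf.sqrt(tf.reduce_sum(tf.square(weight), axis=axis) + eps))

w = tf.get_variable("w", shape=[650, 2600])  # e.g. an LSTM kernel
glasso_lambda = 1e-3  # hypothetical strength
penalty = glasso_lambda * group_lasso(w, axis=0)  # groups = columns of w
```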

We propose Intrinsic Sparse Structures (ISS) in LSTMs. By removing one ISS component, we simultaneously remove one hidden state, one cell state, one forget gate, one input gate, one output gate, and one input update. The result is a regular LSTM whose hidden size is reduced by one. ISS is learned with group Lasso regularization. The ISS approach also extends to Recurrent Highway Networks to learn the number of units per layer.
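To make the grouping concrete, here is a sketch of an ISS-style group Lasso term for a single LSTM layer, assuming TensorFlow 1.x and its kernel layout `[input_dim + hidden, 4 * hidden]` with gate blocks ordered i, j, f, o. It covers only the within-layer weights; a full ISS group would additionally span the layers that consume this layer's output. All names and strengths are illustrative:

```python
import tensorflow as tf  # assumes TensorFlow 1.x

def iss_penalty(kernel, input_dim, hidden_size, eps=1e-8):
    """Group Lasso over ISS-style groups: group k ties the weights that
    write to hidden unit k (one column per gate block) to the weights
    that read hidden unit k (one recurrent row), so driving the whole
    group to zero shrinks the hidden size by one."""
    terms = []
    for k in range(hidden_size):
        cols = [kernel[:, g * hidden_size + k] for g in range(4)]  # write unit k
        row = kernel[input_dim + k, :]                             # read unit k
        group = tf.concat([tf.reshape(t, [-1]) for t in cols + [row]], axis=0)
        terms.append(tf.sqrt(tf.reduce_sum(tf.square(group)) + eps))
    return tf.add_n(terms)

kernel = tf.get_variable("kernel", [200 + 200, 4 * 200])
penalty = 1e-3 * iss_penalty(kernel, input_dim=200, hidden_size=200)
```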

Examples

Stacked LSTMs

The code in ptb trains stacked LSTMs for language modeling on the Penn TreeBank dataset.

Recurrent Highway Networks

The code in rhns applies ISS to Recurrent Highway Networks. ISS was proposed for LSTMs, but it extends naturally to other recurrent neural networks such as Recurrent Highway Networks.

Attention model

The code in bidaf is an attention+LSTM model for question answering on the SQuAD dataset.