LightRNN-NIPS2016-Tensorflow_code

The TensorFlow implementation of the NIPS 2016 paper "LightRNN: Memory and Computation-Efficient Recurrent Neural Networks" (https://arxiv.org/abs/1610.09893).

LightRNN: Memory and Computation-Efficient Recurrent Neural Networks

To reduce both model size and running time, especially for text corpora with large vocabularies, the authors propose a 2-Component (2C) shared embedding for word representations. Every word in the vocabulary is allocated to a cell of a table, where each row of the table is associated with one vector and each column with another vector. Depending on its position in the table, a word is jointly represented by two components: a row vector and a column vector. Since words in the same row share the row vector and words in the same column share the column vector, only 2√|V| vectors are needed to represent a vocabulary of |V| unique words, far fewer than the |V| vectors required by existing approaches; for example, a vocabulary of 10 million words needs only about 6,300 shared vectors. LightRNN thus significantly reduces the model size and speeds up training without sacrificing accuracy (it achieves similar, if not better, perplexity compared to state-of-the-art language models).
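Conceptually, the 2C shared embedding can be sketched in a few lines. The snippet below is a minimal, illustrative sketch, not the repo's actual code: the names (`word_position`, `embed`) and the fixed row-major word-to-cell allocation are assumptions for illustration, whereas the paper learns the allocation with a bootstrap reallocation procedure.

```python
import numpy as np

# Minimal sketch of the 2-Component (2C) shared embedding.
# All names and the row-major allocation are illustrative assumptions.

vocab_size = 10000                               # |V|
table_size = int(np.ceil(np.sqrt(vocab_size)))   # side length of the word table
embed_dim = 128
rng = np.random.default_rng(0)

# Only 2 * sqrt(|V|) vectors are stored instead of |V|:
row_embeddings = rng.standard_normal((table_size, embed_dim))
col_embeddings = rng.standard_normal((table_size, embed_dim))

def word_position(word_id):
    """Map a word id to its (row, column) cell in the table.

    Fixed row-major layout for illustration only; LightRNN refines
    the word-to-cell allocation during training.
    """
    return word_id // table_size, word_id % table_size

def embed(word_id):
    """Jointly represent a word by its shared row and column vectors."""
    r, c = word_position(word_id)
    return row_embeddings[r], col_embeddings[c]

row_vec, col_vec = embed(42)
print(row_vec.shape, col_vec.shape)  # (128,) (128,)
```

Note the storage saving: the two tables above hold 2 × 100 vectors for a 10,000-word vocabulary, versus 10,000 vectors for a conventional embedding matrix.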

For more details, please refer to the LightRNN paper (https://arxiv.org/abs/1610.09893).

Update

  • For certain reasons, I have cleared this repo. The code may become available again in the future.
