stacknn-core: The Successor to StackNN

This library implements differentiable stacks and queues in PyTorch. It is a lightweight version of StackNN that is easier to install and optimized for faster training. The API is also straightforward. For example, to construct a differentiable stack and perform a push, all you have to do is:

from stacknn.structs import Stack
stack = Stack(BATCH_SIZE, STACK_VECTOR_SIZE)
read_vectors = stack(value_vectors, pop_strengths, push_strengths)
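Under the hood, a differentiable stack of this kind (in the style of Grefenstette et al., 2015, which the neural-stack literature builds on) stores a matrix of pushed value vectors alongside a vector of fractional strengths; pop and push are continuous operations on those strengths, and a read returns a strength-weighted mixture of the top of the stack. A minimal NumPy sketch of that semantics, for intuition only (the function and variable names here are illustrative, not the stacknn-core API):

```python
import numpy as np

def push_pop(values, strengths, v, u, d):
    """One continuous stack step: pop with strength u, push v with strength d.

    values:    (n, dim) array of previously pushed vectors
    strengths: (n,) array of fractional strengths remaining on each
    Returns the updated (values, strengths) and the read vector.
    """
    # Pop: consume up to u units of strength from the top downward.
    new_strengths = np.empty_like(strengths)
    for i in range(len(strengths)):
        above = strengths[i + 1:].sum()
        new_strengths[i] = max(0.0, strengths[i] - max(0.0, u - above))
    # Push: append the new vector with strength d.
    values = np.vstack([values, v])
    strengths = np.append(new_strengths, d)
    # Read: a weighted average of the topmost ~1 unit of strength.
    read = np.zeros(values.shape[1])
    for i in range(len(strengths)):
        above = strengths[i + 1:].sum()
        read += min(strengths[i], max(0.0, 1.0 - above)) * values[i]
    return values, strengths, read
```

For example, pushing a vector with strength 1 and popping nothing (u = 0) reads that vector back; a subsequent full pop (u = 1) exposes whatever was below it. Because every operation is built from max, min, and sums, the whole step is (sub)differentiable, which is what lets gradients flow through stack decisions during training.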

For more complex use cases, refer to the (old) StackNN or industrial-stacknns repositories.

All the code in this repository is associated with the paper Context-Free Transductions with Neural Stacks, which appeared at the Analyzing and Interpreting Neural Networks for NLP workshop at EMNLP 2018. Refer to our paper for more theoretical background on differentiable data structures.

Installation

pip install git+https://github.com/viking-sudo-rm/stacknn-core

The library only supports Python 3 and depends on numpy and torch.

Contributing

We welcome outside contributions in the form of pull requests. Please report any bugs on the GitHub issue tracker.

If you are a Yale student interested in joining Computational Linguistics at Yale for this or another project, please contact Bob Frank.

Citations

If you use this codebase in your research, please cite the associated paper:

@inproceedings{hao-etal-2018-context,
    title = "Context-Free Transductions with Neural Stacks",
    author = "Hao, Yiding  and
      Merrill, William  and
      Angluin, Dana  and
      Frank, Robert  and
      Amsel, Noah  and
      Benz, Andrew  and
      Mendelsohn, Simon",
    booktitle = "Proceedings of the 2018 {EMNLP} Workshop {B}lackbox{NLP}: Analyzing and Interpreting Neural Networks for {NLP}",
    month = nov,
    year = "2018",
    address = "Brussels, Belgium",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/W18-5433",
    pages = "306--315",
    abstract = "This paper analyzes the behavior of stack-augmented recurrent neural network (RNN) models. Due to the architectural similarity between stack RNNs and pushdown transducers, we train stack RNN models on a number of tasks, including string reversal, context-free language modelling, and cumulative XOR evaluation. Examining the behavior of our networks, we show that stack-augmented RNNs can discover intuitive stack-based strategies for solving our tasks. However, stack RNNs are more difficult to train than classical architectures such as LSTMs. Rather than employ stack-based strategies, more complex stack-augmented networks often find approximate solutions by using the stack as unstructured memory.",
}

Acknowledgements

Thanks to the members of Computational Linguistics at Yale who contributed to earlier iterations of this library. All contributors are listed on the Contributors page.

Unit Tests

To run the unit tests for this library, execute the following command from the root stacknn-core directory:

python -m unittest