RBP

This is the PyTorch implementation of Recurrent Back-Propagation as described in the following paper:

@article{liao2018reviving,
  title={Reviving and Improving Recurrent Back-Propagation},
  author={Liao, Renjie and Xiong, Yuwen and Fetaya, Ethan and Zhang, Lisa and Yoon, KiJung and Pitkow, Xaq and Urtasun, Raquel and Zemel, Richard},
  journal={arXiv preprint arXiv:1803.06396},
  year={2018}
}
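
For background, RBP differentiates through the fixed point h* = f(h*) of a recurrent update instead of unrolling the iteration. The following is a minimal sketch of the truncated-Neumann-series variant of this idea, written against the plain PyTorch autograd API; it is an illustration only, not the code in model/rbp.py, and the function name and the assumption that f is a differentiable update map returning a tensor of the same shape as its input are ours:

import torch

def neumann_rbp(f, h_star, grad_h, num_terms=20):
    """Approximate v = (I - J^T)^{-1} grad_h with a truncated Neumann
    series, where J = df/dh evaluated at the fixed point h_star and
    grad_h is the gradient of the loss with respect to h_star."""
    h_star = h_star.detach().requires_grad_(True)
    fh = f(h_star)  # one extra update step, to build a graph for J^T products
    v, total = grad_h, grad_h
    for _ in range(num_terms):
        # vector-Jacobian product: v <- J^T v
        v = torch.autograd.grad(fh, h_star, grad_outputs=v,
                                retain_graph=True)[0]
        total = total + v
    return total

The parameter gradients then follow from one more vector-Jacobian product of f's output against the parameters, passing the returned vector as grad_outputs.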

Setup

To set up the experiments, build the customized operators by running the following script:

./setup.sh

Dependencies

Python 3, PyTorch 0.4.0

Run Demos

  • To run experiment X, where X is one of {hopfield, cora, pubmed, hypergrad}:

    python run_exp.py -c config/X.yaml

Notes:

  • Most hyperparameters in the configuration YAML files are self-explanatory.
  • To switch between BPTT, TBPTT, and the RBP variants, specify grad_method in the config file.
  • Conjugate-gradient-based RBP requires support for forward-mode automatic differentiation, which we provide only for the Hopfield network and graph neural network (GNN) experiments. See the comments in model/rbp.py for details, and the generic sketch after these notes.
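
When native forward-mode support is unavailable, a forward-mode product can also be emulated generically with two reverse-mode passes (the "double-backward" trick), at the cost of building an extra graph. The sketch below shows that generic fallback only; it is not the customized operators built by setup.sh:

import torch

def jvp(y, x, v):
    """Forward-mode product (dy/dx) @ v via two reverse-mode passes."""
    u = torch.zeros_like(y, requires_grad=True)  # dummy cotangent
    # g = J^T u, kept differentiable with respect to u
    g = torch.autograd.grad(y, x, grad_outputs=u, create_graph=True)[0]
    # d(g . v)/du = d(u^T J v)/du = J v
    return torch.autograd.grad(g, u, grad_outputs=v)[0]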

Cite

Please cite our paper if you use this code in your research.

Questions/Bugs

Please submit a GitHub issue or contact rjliao@cs.toronto.edu if you have any questions or find any bugs.