# Code repository for the 2-simplicial Transformer

This is the public repository for the paper "Logic and the 2-Simplicial Transformer" by James Clift, Dmitry Doryn, Daniel Murfet and James Wallbridge. The initial release contains the simplicial and relational agents, environment, training notebooks and videos of rollouts of the trained agents. This is research code and some assembly may be required. If you have problems getting the code to run, or want to request additional data not provided here, please email Daniel.

Main files:

- Relational agent: `agent/agent_relational.py`
- Simplicial agent: `agent/agent_simplicial.py`
- Environment: `env/bridge_boxworld.py`
- Training notebooks: `notebooks/`
- Videos (see below)
- Trained agent weights (not yet available)

There is a brief training guide in `notebooks/training.ipynb` and brief installation instructions below. In `notes-implementation.md` we collect various notes about training agents with IMPALA in Ray RLlib that might be useful (but as the Ray codebase is evolving quickly, many of the class names in these notes may now be incorrect). Note that we use a patched version of several of the files from RLlib; see the installation instructions for details.

For background on some of the ideas from neuroscience that partly inspired this work, see the talk "Building models of the world for behavioural control" by Tim Behrens from Cosyne 2018.

## Videos

The video rollouts are provided for the best training run of the simplicial agent (simplicial agent A of the paper). The videos are organised by puzzle type, with 335C meaning the third episode sampled on puzzle type 335. Videos are not cherry-picked, and include episodes where the agent opens the bridge. There are three episodes of every puzzle type, and extras for the harder puzzles 335 and 336. Figure 6 of the paper is step 8 of episode 335A, Figure 7 is step 18 of episode 325C, Figure 8 is step 13 of episode 335A, and Figure 9 is step 29 of episode 335E.
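The episode naming convention above is easy to script against. A minimal helper (the function name is our own, not part of the repository) might look like:

```python
def parse_episode_id(episode_id):
    """Split an episode ID like "335C" into its puzzle type and episode number.

    The letter suffix indexes the sampled episode: A is the first episode
    of that puzzle type, B the second, C the third, and so on.
    """
    puzzle_type, suffix = episode_id[:-1], episode_id[-1]
    return puzzle_type, ord(suffix) - ord("A") + 1
```

For example, `parse_episode_id("335C")` returns `("335", 3)`.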

## Trained agent weights

Not yet available.

In the `experiments` folder we collect some checkpoints of the eight agents described in the paper. Reconstructing the agent from these checkpoints requires some expertise with Ray RLlib.

- simplicial agent A = `30-7-19-A`
- simplicial agent B = `1-8-19-A`
- simplicial agent C = `23-7-19-A`
- simplicial agent D = `13-8-19-C`
- relational agent A = `4-8-19-A`
- relational agent B = `12-6-19-A`
- relational agent C = `13-8-19-A`
- relational agent D = `13-6-19-C`

For some of the agents the very last checkpoint is "bad", in the sense that the win rate decreased from its converged value (this is due to our use of a fixed learning rate over the entire course of training), so we are distributing the last good checkpoint as well as a sample of earlier checkpoints. We are happy to share the entire checkpoint history, but these files approach 500 MB for some of the agents and we do not currently have a good distribution method. Nonetheless, if you want the files, get in touch and we can work something out.
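For scripting over the `experiments` folder, the correspondence between paper agent names and training run IDs can be written as a small lookup table (this dictionary is our own convenience, not a file shipped in the repository):

```python
# Paper agent name -> training run ID in the experiments folder
CHECKPOINT_RUNS = {
    "simplicial agent A": "30-7-19-A",
    "simplicial agent B": "1-8-19-A",
    "simplicial agent C": "23-7-19-A",
    "simplicial agent D": "13-8-19-C",
    "relational agent A": "4-8-19-A",
    "relational agent B": "12-6-19-A",
    "relational agent C": "13-8-19-A",
    "relational agent D": "13-6-19-C",
}
```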

## Caveats

- The current implementation of the simplicial agent (`agent/agent_simplicial.py`) assumes a single head of 2-simplicial attention.
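For orientation, here is a pure-Python sketch of a single head of 2-simplicial attention in the spirit of the paper: each query attends over *pairs* of entities via a trilinear form, rather than over single entities via a bilinear one. We use the simplest scalar trilinear form (a sum of coordinate-wise triple products) and combine values by elementwise product; the layer in `agent_simplicial.py` is more involved, so treat this purely as an illustration of the attention pattern, not as the paper's implementation.

```python
import math

def two_simplicial_attention(q, k1, k2, v):
    """Sketch of single-head 2-simplicial attention over n entities.

    q, k1, k2, v: lists of n vectors (lists of floats), all of dimension d.
    For query i, logits over ordered pairs (j, k) come from the trilinear
    form sum_t q[i][t] * k1[j][t] * k2[k][t]; the output mixes the
    elementwise products v[j] * v[k] with softmax weights over all pairs.
    """
    n, d = len(q), len(q[0])
    out = []
    for i in range(n):
        # Trilinear logits over all ordered pairs (j, k).
        logits = [[sum(q[i][t] * k1[j][t] * k2[k][t] for t in range(d))
                   for k in range(n)] for j in range(n)]
        # Softmax over the n*n pairs (max-subtracted for stability).
        m = max(max(row) for row in logits)
        w = [[math.exp(logits[j][k] - m) for k in range(n)] for j in range(n)]
        z = sum(sum(row) for row in w)
        # Weighted sum of elementwise products of value vectors.
        o = [0.0] * d
        for j in range(n):
            for k in range(n):
                a = w[j][k] / z
                for t in range(d):
                    o[t] += a * v[j][t] * v[k][t]
        out.append(o)
    return out
```

Note the cost is cubic in the number of entities (n queries times n² pairs), which is why the number of 2-simplicial heads matters in practice.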

## Installation

The following instructions assume you know how to set up TensorFlow, and cover the other aspects of setting up a blank GCP or AWS instance to the point where it can run our training notebooks. Our training was done under Ray version 0.7.0.dev2 and TensorFlow 1.13.1, and we make no assurances that the code will even run on later versions. As detailed in the paper, our head nodes (the ones on which we run the training notebooks) have either a P100 or K80 GPU, and the worker nodes have no GPU.

```shell
sudo apt-get update
sudo apt install python-pip
sudo apt install python3-dev python3-pip
sudo apt install cmake
sudo apt-get install zlib1g-dev
sudo apt install git
pip3 install --user tensorflow-gpu
pip3 install -U dask
pip3 install --user ray[rllib]
pip3 install --user ray[debug]
pip3 install jupyter
pip3 install -U matplotlib
pip3 install psutil
pip3 install --upgrade gym
sudo apt-get install ffmpeg
sudo apt-get install pssh
sudo apt-get install keychain
pip3 install -U https://s3-us-west-2.amazonaws.com/ray-wheels/latest/ray-0.7.0.dev2-cp36-cp36m-manylinux1_x86_64.whl
```

On the CPU-only machines, use `pip3 install --user tensorflow` instead.

More installation:

```shell
git clone https://github.com/dmurfet/2simplicialtransformer.git
git clone https://github.com/kpot/keras-transformer.git
cd keras-transformer && pip3 install --user .
```

Reboot after this so the PATH changes take effect. You'll also need to open port 6379 for Redis and port 8888 for Jupyter in the console Security Groups tab; otherwise RLlib won't be able to initialise the cluster, and the Jupyter notebook will not be remotely accessible.

Jupyter setup (for head nodes only): To set up Jupyter as a remote service, follow these instructions (including making a keypair), except that you need to use `c.NotebookApp.ip = '0.0.0.0'` rather than `c.NotebookApp.ip = '*'` as they say. To get Jupyter to run on startup you'll first need to create an rc.local file (on Ubuntu 18 this is no longer shipped as standard; see this). Then add this line to rc.local:

```shell
cd /home/ubuntu && su ubuntu -c "/home/ubuntu/.local/bin/jupyter notebook &"
```
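Put in context, a minimal `/etc/rc.local` containing that line would read as follows (the shebang and `exit 0` are the standard rc.local boilerplate, not something specific to this repository; remember to make the file executable with `chmod +x /etc/rc.local`):

```shell
#!/bin/sh -e
# Start a Jupyter notebook server as the ubuntu user at boot.
cd /home/ubuntu && su ubuntu -c "/home/ubuntu/.local/bin/jupyter notebook &"
exit 0
```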