PyTorch Implementation of Distributed Prioritized Experience Replay (Ape-X)


An Implementation of Distributed Prioritized Experience Replay (Horgan et al. 2018) in PyTorch.

The paper proposes a distributed architecture for deep reinforcement learning built around distributed prioritized experience replay. Many actors, each with its own exploration rate, generate experience in parallel; this enables fast, broad exploration and makes the model less likely to settle on a suboptimal policy.
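In the paper, exploration breadth comes from giving each of the N actors a fixed, distinct epsilon following the schedule ε_i = ε^(1 + α·i/(N−1)) with ε = 0.4 and α = 7. A minimal sketch of that schedule:

```python
# Per-actor epsilon schedule from the Ape-X paper:
# epsilon_i = epsilon ** (1 + alpha * i / (N - 1)), epsilon=0.4, alpha=7.
def actor_epsilons(num_actors, base_eps=0.4, alpha=7.0):
    return [base_eps ** (1 + alpha * i / (num_actors - 1))
            for i in range(num_actors)]

eps = actor_epsilons(8)
# Actor 0 explores the most (eps = 0.4); the last actor is nearly greedy.
```

With 192 actors as used below, the schedule spans from heavy exploration down to an almost deterministic policy.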

Several existing implementations are optimized for a single powerful machine with many cores, but I tried to implement Ape-X in a multi-node setting on AWS EC2 instances. ZeroMQ, AsyncIO, and multiprocessing were very helpful tools for this.

There are still performance issues with the replay server, caused by the shared-memory lock and by hyperparameter tuning, but it works nonetheless. Some parts are also still hard-coded for convenience. I'm working on improving many parts and would really appreciate your help.
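The lock contention mentioned above arises because every add and sample on the replay buffer must take the same lock. A minimal sketch of that structure (an assumed illustration, not this repo's actual replay-server code):

```python
import threading
import random
from collections import deque

# Sketch of a replay buffer guarded by a single lock: every add() and
# sample() serializes on it, which is the contention point under many actors.
class LockedReplay:
    def __init__(self, capacity=1000):
        self.buffer = deque(maxlen=capacity)
        self.lock = threading.Lock()

    def add(self, transition):
        with self.lock:
            self.buffer.append(transition)

    def sample(self, batch_size):
        with self.lock:
            return random.sample(self.buffer, min(batch_size, len(self.buffer)))

replay = LockedReplay()
threads = [threading.Thread(target=lambda i=i: [replay.add((i, t)) for t in range(100)])
           for i in range(4)]
for th in threads:
    th.start()
for th in threads:
    th.join()
batch = replay.sample(32)  # all 400 transitions arrive despite concurrent adds
```

With hundreds of actors pushing transitions, time spent waiting on this lock directly limits the batches/s the learner can consume.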


Requirements

Python 3.7
  1. numpy 1.16.0 has a memory-leak issue with pickle, so avoid that exact version.
  2. CPU performance of pytorch-nightly-cpu from conda is much better than the normal torch build.
  3. tensorflow is necessary to use tensorboardX.
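Since the numpy issue above only affects one exact release, a small guard like this (a hypothetical helper, not part of this repo) can fail fast before training starts:

```python
import numpy as np

# numpy 1.16.0 leaks memory when pickling arrays (see the note above);
# this hypothetical helper refuses to run on exactly that release.
def assert_safe_numpy(version=None):
    version = version or np.__version__
    if version == "1.16.0":
        raise RuntimeError("numpy 1.16.0 leaks memory under pickle; "
                           "install a different version.")
    return version
```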

Overall Structure



(Figures: overall architecture diagram, training-speed plot, and evaluation plot.)

Seaquest results trained with 192 actors. Due to the slower training speed (10-12 batches/s instead of the paper's 19 batches/s), it was not possible to reproduce the paper's results exactly, but this still shows a dramatic improvement over my baseline implementations (Rainbow, ACER).

A GIF is included to show how the agent actually acts and scores in SeaquestNoFrameskip-v4. I recently noticed that the actor's score is much better in the evaluation setting (epsilon = 0.0) than in the training plot above.
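The gap makes sense: with epsilon = 0.0 the agent always takes the greedy action, while training actors still take random actions some fraction of the time. A sketch of epsilon-greedy selection (the `q_values` list here stands in for the Q-network's output):

```python
import random

# Epsilon-greedy action selection; evaluation uses eps=0.0, so the
# greedy argmax branch is always taken.
def select_action(q_values, eps):
    if random.random() < eps:
        return random.randrange(len(q_values))                  # explore
    return max(range(len(q_values)), key=q_values.__getitem__)  # exploit

action = select_action([0.1, 0.9, 0.3], eps=0.0)  # always picks index 1
```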

How To Use

Single Machine

My focus was running Ape-X in a multi-node environment, but you can also run this model on a powerful single machine. I have not run any single-machine experiments, so I'm not sure you can achieve satisfactory performance that way. For details, see the files included in this repo.

Multi-Node with AWS EC2

Be careful not to commit your private AWS secret/access keys to a public repository while following these instructions!


Packer is a useful tool for building automated machine images. You'll be able to make an AWS AMI with a few lines of JSON and a shell script; many more features are documented on Packer's website. I've put all the necessary files in the deploy/packer directory. If you're not interested in using Packer, a pre-built AMI is already included, so you can skip this part.

  1. Enter your secret/access key (with an appropriate IAM policy) in the variables.json file.
  2. Run the commands below in parallel:

```
packer build -var-file=variables.json ape_x_actor.json
packer build -var-file=variables.json ape_x_replay.json
packer build -var-file=variables.json ape_x_learner.json
```

  3. You should then see the AMIs created in your AWS account.


Terraform is a useful IaC (Infrastructure as Code) tool. You can start multiple instances with a single terraform apply and destroy them all with terraform destroy. For more information, see Terraform's website tutorials and documentation. All the necessary configuration is already included in the deploy directory; the most important file is terraform.tfvars.

  1. Read terraform.tfvars and enter the necessary values. Values set in terraform.tfvars override the defaults.
  2. Change the EC2 instance types to meet your budget.
  3. Run terraform init in the deploy directory.
  4. Run terraform apply; Terraform will magically create all the necessary instances and training will start.
  5. To see how the trained model performs, check the TensorBoard of the actor with the largest actor id, which has the smallest epsilon value. You can access TensorBoard at http://public_ip:6006. Alternatively, you could add an evaluator node on a new instance, but that costs more money :(

Thanks to
