TensorFlow implementation of Ako, a decentralized approach to distributed deep learning.


Decentralized Distributed Deep Learning (DL) in TensorFlow

This is a TensorFlow implementation of Ako (Ako: Decentralised Deep Learning with Partial Gradient Exchange). You can train any DNN in a decentralized manner without parameter servers: workers exchange partitioned gradients directly with each other and update their own local weights. Please refer to the original Ako paper or our project home for more details.
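The core idea of partial gradient exchange can be sketched as follows. This is a hypothetical illustration, not code from this repository: each worker splits its gradient into p partitions and sends a different partition to each peer every iteration, so a full gradient reaches every peer once every p iterations. The function names and round-robin schedule below are assumptions for illustration only.

```python
# Hypothetical sketch of Ako-style partial gradient exchange (illustration
# only; the repo's actual partitioning logic lives in redis_ako.py and may
# differ). A worker's flat gradient is split into p contiguous partitions.

def partition(grad, p):
    """Split a flat gradient (list of floats) into p contiguous partitions."""
    n = len(grad)
    size = (n + p - 1) // p  # ceiling division so every element is covered
    return [grad[i * size:(i + 1) * size] for i in range(p)]

def partition_for_peer(worker_id, peer_id, iteration, p):
    """Which partition worker_id sends to peer_id at this iteration.

    The (worker + peer + iteration) % p schedule is an assumed round-robin:
    over p consecutive iterations, each peer receives all p partitions.
    """
    return (worker_id + peer_id + iteration) % p

grad = [0.1 * i for i in range(10)]
parts = partition(grad, p=3)
assert sum(len(x) for x in parts) == len(grad)  # no elements lost
```

Over p iterations every peer has seen the whole gradient once, which is what lets Ako trade per-iteration bandwidth for staleness.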


  • Environments

    • Ubuntu 16.04
    • Python 2.7
    • TensorFlow 1.4
  • Prerequisites

    • redis-server & redis client (redis-py)
    • tflearn (only for loading the CIFAR10 dataset)
      $ sudo apt-get update
      $ sudo apt-get install redis-server -y
      $ sudo pip install redis
      $ sudo pip install tflearn
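Workers pass gradient partitions through Redis, which stores raw bytes, so each partition must be serialized before it is pushed onto a Redis list and deserialized after it is popped. The helpers below are a hypothetical sketch (not from this repo) using the standard-library pickle module; the repo's actual wire format in redis_ako_queue.py may differ.

```python
# Hypothetical serialization helpers (illustration only) for shipping a
# gradient partition through a Redis list, e.g. via LPUSH on the sender
# and BRPOP on the receiver.
import pickle

def pack(worker_id, partition_id, values):
    """Serialize a gradient partition into bytes for a Redis list entry."""
    return pickle.dumps({"worker": worker_id,
                         "part": partition_id,
                         "values": values})

def unpack(blob):
    """Inverse of pack(): restore the partition dict from Redis bytes."""
    return pickle.loads(blob)

msg = pack(0, 2, [0.5, -1.25])
assert unpack(msg)["values"] == [0.5, -1.25]  # lossless round trip
```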

How to run

  1. Build your model in redis_ako_model.py
  2. Write your session and load your dataset in redis_ako.py
  3. Change your configurations in redis_ako_config.py
    • Basic configurations: cluster IPs/ports, Redis port, synchronous training, training epochs, batch size, number of batches
    • Stopping criteria: train for a fixed number of iterations, for a fixed wall-clock time, or until a target accuracy is reached
    • Ako-specific configurations: p value (number of gradient partitions), partition details, SSP iteration bound, number of queue threads
  4. Execute it
    # When 3 workers are clustered and used for decentralized DL
    # At worker 0
    $ python redis_ako.py wk 0 
    # At worker 1
    $ python redis_ako.py wk 1
    # At worker 2
    $ python redis_ako.py wk 2
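To make step 3 concrete, here is a sketch of the kinds of settings redis_ako_config.py holds for the three-worker run above. All variable names and values are assumptions for illustration; check the actual file for the names it uses.

```python
# Hypothetical configuration sketch (variable names are assumed, not taken
# from redis_ako_config.py) for a 3-worker decentralized training run.
CLUSTER_HOSTS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # one entry per worker
CLUSTER_PORT = 2222          # port each worker listens on
REDIS_PORT = 6379            # default redis-server port
SYNC_TRAINING = True         # synchronous vs. asynchronous updates
TRAINING_EPOCHS = 10
BATCH_SIZE = 128
NUM_BATCHES = 390            # e.g. CIFAR10: 50000 / 128 ~= 390
P_VALUE = 3                  # number of gradient partitions (Ako's p)
SSP_BOUND = 4                # max iteration gap between fastest/slowest worker
NUM_QUEUE_THREADS = 2        # threads draining the Redis gradient queues
```

With three hosts in CLUSTER_HOSTS, `python redis_ako.py wk 0/1/2` would start one worker per machine, each identified by its index into that list.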