
Distributed TensorFlow 1.2 Example (DEPRECATED)

This example uses data parallelism with shared model parameters while updating the parameters asynchronously. See the comments in the script for the changes needed to make the parameter updates synchronous (not sure whether the synchronous part is implemented correctly, though).
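For reference, the standard TF 1.x route to synchronous updates is to wrap the optimizer in tf.train.SyncReplicasOptimizer, which aggregates gradients from all workers before applying a single update to the shared parameters. Whether the script's comments describe exactly this approach isn't shown here, so treat this as a generic sketch; the function and variable names are illustrative, not taken from example.py:

    import tensorflow as tf

    def make_sync_train_op(loss, global_step, num_workers, is_chief):
        opt = tf.train.GradientDescentOptimizer(0.5)
        # Aggregate gradients from all workers before applying one update
        # to the shared parameters, instead of applying each independently.
        sync_opt = tf.train.SyncReplicasOptimizer(
            opt,
            replicas_to_aggregate=num_workers,
            total_num_replicas=num_workers)
        train_op = sync_opt.minimize(loss, global_step=global_step)
        # The hook sets up the aggregation queues; pass it to the
        # training session on each worker.
        hook = sync_opt.make_session_run_hook(is_chief)
        return train_op, hook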

It trains a simple sigmoid neural network on MNIST for 20 epochs across three worker machines and one parameter server. The goal was not to achieve high accuracy but to get to know TensorFlow.
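Concretely, the data-parallel pattern rests on tf.train.replica_device_setter: variables are pinned to the parameter server while each worker builds its own copy of the compute ops, so each worker's gradient update hits the shared parameters independently, i.e. asynchronously. A minimal sketch in the TF 1.x style (layer sizes and names are illustrative, not taken from example.py):

    import tensorflow as tf

    def build_graph(cluster, task_index):
        # Variables land on /job:ps; ops land on this worker's device.
        with tf.device(tf.train.replica_device_setter(
                worker_device="/job:worker/task:%d" % task_index,
                cluster=cluster)):
            x = tf.placeholder(tf.float32, [None, 784])
            y_ = tf.placeholder(tf.float32, [None, 10])
            # A simple sigmoid network, as in the example.
            W1 = tf.Variable(tf.truncated_normal([784, 100], stddev=0.1))
            b1 = tf.Variable(tf.zeros([100]))
            h = tf.sigmoid(tf.matmul(x, W1) + b1)
            W2 = tf.Variable(tf.truncated_normal([100, 10], stddev=0.1))
            b2 = tf.Variable(tf.zeros([10]))
            logits = tf.matmul(h, W2) + b2
            loss = tf.reduce_mean(
                tf.nn.softmax_cross_entropy_with_logits(
                    labels=y_, logits=logits))
            global_step = tf.Variable(0, name="global_step",
                                      trainable=False)
            # Each worker applies its own gradients without waiting for
            # the others: this is what makes the updates asynchronous.
            train_op = tf.train.GradientDescentOptimizer(0.5).minimize(
                loss, global_step=global_step)
        return x, y_, train_op, global_step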

Run it like this:

First, change the hardcoded host names to your own, then run the following commands on the respective machines.

pc-01$ python example.py --job_name="ps" --task_index=0 
pc-02$ python example.py --job_name="worker" --task_index=0 
pc-03$ python example.py --job_name="worker" --task_index=1 
pc-04$ python example.py --job_name="worker" --task_index=2 
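For context, here is a sketch of the scaffolding that consumes these flags. The hostnames and port below are placeholders standing in for the hardcoded values you are asked to replace:

    import tensorflow as tf

    tf.app.flags.DEFINE_string("job_name", "", "Either 'ps' or 'worker'")
    tf.app.flags.DEFINE_integer("task_index", 0, "Index within the job")
    FLAGS = tf.app.flags.FLAGS

    # Placeholder hostnames; substitute your own machines (see above).
    cluster = tf.train.ClusterSpec({
        "ps": ["pc-01:2222"],
        "worker": ["pc-02:2222", "pc-03:2222", "pc-04:2222"],
    })

    server = tf.train.Server(cluster,
                             job_name=FLAGS.job_name,
                             task_index=FLAGS.task_index)

    if FLAGS.job_name == "ps":
        # Parameter server: host the shared variables and serve worker RPCs.
        server.join()
    elif FLAGS.job_name == "worker":
        # Worker: build the replicated graph (see the sketch above) and run
        # training steps through a session pointed at this server.
        with tf.Session(server.target) as sess:
            pass  # training loop goes here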

Thanks to snowsquizy for updating the script to TensorFlow 1.2.
