# Scalability Comparison Scripts for Deep Learning Frameworks

This repository contains scripts that compare the scalability of deep learning frameworks.

The scripts train Inception v3 and AlexNet using synchronous stochastic gradient descent (SGD). To keep the comparison runtime reasonable, we run a few tens of SGD iterations and compute the throughput as images processed per second.
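The throughput calculation itself is simple; the following sketch shows the idea (the function and parameter names here are hypothetical illustrations, not taken from these scripts):

```python
# Illustrative sketch of the throughput measurement (hypothetical names,
# not the repository's actual code).
import time

def measure_throughput(train_step, batch_size, num_iters=50, warmup=5):
    """Time `num_iters` synchronous SGD steps after a short warm-up and
    return throughput in images processed per second."""
    for _ in range(warmup):        # exclude one-time setup/initialization cost
        train_step()
    start = time.time()
    for _ in range(num_iters):
        train_step()
    elapsed = time.time() - start
    return batch_size * num_iters / elapsed
```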

Comparisons can be done on clusters created with AWS CloudFormation using the Amazon Deep Learning AMI.

### To run comparisons in a deep learning cluster created with CloudFormation

Step 1: Create a deep learning cluster using CloudFormation.

Step 2: Log in to the master instance using SSH, including the `-A` option to enable SSH agent forwarding. Example: `ssh -A masternode`

Step 3: Run the following command: `git clone https://github.com/awslabs/deeplearning-benchmark.git && cd deeplearning-benchmark/benchmark/ && bash runscalabilitytest.sh`

The `runscalabilitytest.sh` script runs the scalability tests and records the throughput as images/sec in CSV files under the `csv_*` directories. Each line in a CSV file contains a key-value pair, where the key is the number of GPUs the test was run on and the value is the images processed per second. The script also plots this data in an SVG file named `comparison_graph.svg`.
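For example, a file in this format can be read back into a Python dictionary with a few lines of code (a hedged sketch; the helper below is not part of the repository, and any file layout beyond the key-value pairs described above is an assumption):

```python
# Hypothetical helper (not part of the repository): read one generated
# CSV file into a {num_gpus: images_per_sec} mapping.
import csv

def read_scalability_csv(path):
    results = {}
    with open(path) as f:
        for row in csv.reader(f):
            if len(row) == 2:               # one key-value pair per line
                num_gpus, images_per_sec = row
                results[int(num_gpus)] = float(images_per_sec)
    return results
```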

Note: The following mini-batch sizes are used by default:

| Model | P2 Instance | G2 Instance |
| --- | --- | --- |
| Inception v3 | 32 | 8 |
| AlexNet | 512 | 128 |

The mini-batch size can be changed using the `--models` switch. For example, to run Inception v3 with a batch size of 16 and AlexNet with a batch size of 256, run: `bash runscalabilitytest.sh --models "Inceptionv3:16,Alexnet:256"`

To run training across multiple machines, the scripts use parameter servers to update parameters. On a single machine, it is possible to get better performance by bypassing the parameter servers. For simplicity, these scripts do not run separate code optimized for the single-machine case, since we are interested only in distributed performance across multiple machines; this does not affect the results for distributed training.
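In MXNet, for example, the same training code can switch between a distributed parameter-server store and a purely local one. A minimal sketch, assuming MXNet's `kvstore` API (an illustration of the idea, not the repository's actual code):

```python
# Minimal sketch, assuming MXNet's kvstore API (illustration only, not
# the repository's actual code).
import mxnet as mx

# 'dist_sync' aggregates gradients through parameter servers across
# machines; 'device' aggregates locally across GPUs on one machine.
use_parameter_servers = True  # hypothetical flag for illustration
kv = mx.kvstore.create('dist_sync' if use_parameter_servers else 'device')

# A training call would then pass the store along, e.g.:
# model.fit(train_data, kvstore=kv, ...)
```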