DCGAN

This document has benchmarking instructions and scripts for running DCGAN in the following modes/precisions:

  * FP32 inference

FP32 Inference Instructions

  1. Clone the tensorflow/models repository:
$ git clone https://github.com/tensorflow/models.git

The TensorFlow models repo will be used for running inference as well as converting the CIFAR-10 dataset to the TF records format.

  2. Follow the TensorFlow models Generative Adversarial Network (GAN) instructions to download and convert the CIFAR-10 dataset.
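The GAN instructions rely on the TF-Slim dataset utility in the tensorflow/models repo to download CIFAR-10 and write it out as TFRecord files. A rough sketch of that conversion step is below; the clone location and output directory are assumptions to adjust for your setup, and the script path may differ across tensorflow/models versions:

```shell
# Assumed paths -- adjust to match your clone from step 1.
TF_MODELS_DIR=$HOME/tensorflow/models   # tensorflow/models clone (assumed path)
DATA_DIR=$HOME/cifar10                  # where the CIFAR-10 TFRecords will be written

# TF-Slim's dataset utility downloads CIFAR-10 and converts it to TFRecords.
# (Script location and flags may vary by tensorflow/models version.)
python "$TF_MODELS_DIR/research/slim/download_and_convert_data.py" \
    --dataset_name=cifar10 \
    --dataset_dir="$DATA_DIR"
```

The resulting `$DATA_DIR` is what gets passed to `--data-location` in the benchmarking step below.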

  3. Download and extract the pretrained model:

    $ wget https://storage.googleapis.com/intel-optimized-tensorflow/models/dcgan_fp32_unconditional_cifar10_pretrained_model.tar.gz
    $ tar -xvf dcgan_fp32_unconditional_cifar10_pretrained_model.tar.gz
    
  4. Clone this intelai/models repository:

$ git clone https://github.com/IntelAI/models.git

This repository includes launch scripts for running benchmarks and an optimized version of the DCGAN model code.

  5. Navigate to the benchmarks directory in your local clone of the intelai/models repo from step 4. The launch_benchmark.py script in the benchmarks directory is used for starting a benchmarking run in an optimized TensorFlow docker container. It has arguments to specify which model, framework, mode, precision, and docker image to use, along with your path to the external model directory for --model-source-dir (from step 1), --data-location (from step 2), and --checkpoint (from step 3).

Run benchmarking for throughput and latency with --batch-size=100:

$ cd /home/<user>/models/benchmarks

$ python launch_benchmark.py \
    --model-source-dir /home/<user>/tensorflow/models \
    --model-name dcgan \
    --framework tensorflow \
    --precision fp32 \
    --mode inference \
    --batch-size 100 \
    --socket-id 0 \
    --checkpoint /home/<user>/dcgan_fp32_unconditional_cifar10_pretrained_model \
    --data-location /home/<user>/cifar10 \
    --docker-image intelaipg/intel-optimized-tensorflow:latest-devel-mkl
  6. Log files are located at the value of --output-dir.

Below is a sample log file tail when running benchmarking for throughput:

Batch size: 100 
Batches number: 500
Time spent per BATCH: 35.8268 ms
Total samples/sec: 2791.2030 samples/s
lscpu_path_cmd = command -v lscpu
lscpu located here: /usr/bin/lscpu
Ran inference with batch size 100
Log location outside container: {--output-dir value}/benchmark_dcgan_inference_fp32_20190117_220342.log
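As a quick sanity check on the log output, the reported throughput should equal the batch size divided by the time per batch. A small sketch using the values from the sample log tail above:

```python
# Values taken from the sample log tail above.
batch_size = 100
time_per_batch_ms = 35.8268

# Throughput = samples per batch / seconds per batch.
samples_per_sec = batch_size / (time_per_batch_ms / 1000.0)
print(f"{samples_per_sec:.1f} samples/s")  # ~2791.2, matching the log's samples/s line
```

If the samples/s line in your own log diverges from this arithmetic, the run likely hit a bottleneck outside the model (e.g. data loading).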