Merged
Changes from all commits (65 commits)
5b3ae92
Add lm-1b FP32 inference benchmarking scripts (#254)
guizili0 Apr 4, 2019
d8f9014
Allow overwriting the KMP_* env vars in SSD-MobileNet Int8 script (#267)
dmsuehir Apr 4, 2019
8802bc6
Add Contribute.md doc with instructions on adding a new model (#266)
dmsuehir Apr 4, 2019
86a8eb4
Add note about user set env vars on bare metal (#268)
dmsuehir Apr 5, 2019
a7dc810
ssd-mobilenet int8 inference data-location for accuracy to take full …
mjkyung Apr 9, 2019
e922a53
Fix links to inference and preprocessing files for ResNet50 and ResNe…
dmsuehir Apr 10, 2019
690e261
fix a typo (#277)
mjkyung Apr 10, 2019
5e8d35e
Mobilenet V1 Int8 Inference (#264)
mjkyung Apr 11, 2019
63c1a9c
Add deprecation warning for checkpoint argument (#278)
dmsuehir Apr 11, 2019
66256b5
Change Inception ResNet V2 FP32 to use the frozen graph for benchmark…
dmsuehir Apr 11, 2019
c62be36
Fix input_height/width arg setup for MobileNet V1 Int8 inference (#280)
mjkyung Apr 12, 2019
12c35fa
Add support for custom volumes (#279)
dmsuehir Apr 12, 2019
66e4862
Fix launch_benchmark.py --help output so that it doesn't require othe…
dmsuehir Apr 12, 2019
76fdf16
Clean up log snippets in docs (#283)
dmsuehir Apr 12, 2019
059dc96
MobileNet V1 INT8 Inference README.md frozen graph info update (#284)
mjkyung Apr 12, 2019
759608d
Add default config as json file (#272)
Apr 15, 2019
c0d1fed
Add support for dummy data with MobileNet V1 FP32 (#275)
dmsuehir Apr 15, 2019
fdee53e
Use --no-cache-dir option during pip and virtualenv install (#285)
ashahba Apr 15, 2019
c827585
Add DenseNet 169 FP32 inference benchmarking scripts (#281)
mjkyung Apr 17, 2019
7db6647
Add support for TCMalloc (#287)
dmsuehir Apr 18, 2019
1659cdb
Add SSD-VGG16 COCO int8/fp32 inference benchmarks (#286)
WafaaT Apr 23, 2019
f4fd7a2
Make TCMalloc enabled for int8 by default, but disabled for other pre…
dmsuehir Apr 25, 2019
c21b9ed
add the required dependencies for coco dataset conversion to tf recor…
WafaaT Apr 26, 2019
9f6387d
update object detection models readme for dataset converion. (#293)
WafaaT Apr 26, 2019
3bffb2a
Update Int8 docs to reflect use of tcmalloc (#291)
mhbuehler Apr 29, 2019
b98fc4b
add a reference publication for the ssd_vgg16 doc. (#295)
WafaaT Apr 30, 2019
6d068b7
Fixes tutorial link and text (#296)
mhbuehler Apr 30, 2019
d2547e5
Adds TF Transformer-LT tutorial (#247)
nathan-greeneltch-intel May 1, 2019
9f0ee3d
Add instructions to download and convert coco dataset to TF records u…
WafaaT May 3, 2019
339e8ba
Use model-based JSON files for unit tests args (#294)
WafaaT May 6, 2019
850a003
Update docker images in README files use to TF 1.14 (#297)
dmsuehir May 7, 2019
37958bd
Update FasterRCNN Int8 README file to note benchmarking uses raw imag…
dmsuehir May 9, 2019
94edbc7
fix docker build command (#306)
jitendra42 May 13, 2019
5e19f8a
ADD: Tensorflow Serving Benchmarking (#307)
May 16, 2019
cb2bb07
Make reference file optional for Transformer LT benchmarking (#312)
dmsuehir May 22, 2019
dbc54be
Add SSD-ResNet34 Int8 benchmarking and refactor FP32 code (#301)
guizili0 May 23, 2019
1ecd87b
Enabling ResNet50v1.5 model for FP32 and INT8 (#309)
nhasabni May 23, 2019
3db66e1
Add link download the MobileNet v1 Int8 pretrained model (#313)
dmsuehir May 23, 2019
59dbbda
Trivial update to benchmark README (#315)
dmsuehir May 24, 2019
59563be
Add link to download the DenseNet 169 pretrained model (#318)
dmsuehir May 28, 2019
4adab61
Add iteration time to accuracy scripts (#317)
lwencel May 30, 2019
a2b26ee
Adds TF Serving Transformer-LT Tutorial (#302)
mhbuehler May 30, 2019
f244ec2
Merge branch 'master' of https://github.com/NervanaSystems/intel-mode…
mhbuehler May 31, 2019
8c88b14
Merge pull request #323 from NervanaSystems/melanie/pull_master
dmsuehir May 31, 2019
e4a7f4f
Update verbiage in new READMEs, precisions, tutorials, etc. (#324)
mhbuehler Jun 3, 2019
634d8df
fix one of the data location references in readme (#325)
WafaaT Jun 7, 2019
058f0bf
Add ResNet50 int8 TF Serving Tutorial (#314)
WafaaT Jun 11, 2019
d01c39b
Make the launch script executable (#326)
dmsuehir Jun 11, 2019
a37f48f
Ubuntu 18 tzdata fix (#310)
claynerobison Jun 12, 2019
17a5ccc
Update Transformer LT Official to support num_inter and num_intra thr…
cuixiaom Jun 13, 2019
194e011
TFServing SSD-MobileNet Tutorial (#311)
mhbuehler Jun 13, 2019
ab5c13d
Add arg validation for paths in generate_coco_records.py (#328)
dmsuehir Jun 13, 2019
f0aa7ab
Specify scipy==1.2.1 for MaskRCNN (#329)
dmsuehir Jun 14, 2019
c895e47
Remove grpc package from tfserving dependencies (#330)
mhbuehler Jun 14, 2019
fc16c08
fix the path to the calibration script for resnet101 int8. (#332)
WafaaT Jun 18, 2019
d6c0cb8
NCF doc hotfix (#334)
mhbuehler Jun 18, 2019
2f46653
BKC for mobilenet-v1 int8 inference (#333)
wenxizhu Jun 21, 2019
41977d7
TF Serving: tf version fix (#337)
jitendra42 Jun 21, 2019
6a13ce8
Install the development package for google-perftools (#338)
ashahba Jun 24, 2019
fba107a
Update TF image tag and updates due to using a non-dev container (#339)
dmsuehir Jun 25, 2019
51baf07
Update lm-1b README due to branch and path changes (#343)
dmsuehir Jun 27, 2019
2aa6204
Update README files to use tf-cpu.1-14 docker image (#346)
dmsuehir Jul 2, 2019
f2cc76d
Update Pillow version and py3 fix (#351)
dmsuehir Jul 2, 2019
53e25d0
Updating docker images that were missed earlier (#352)
dmsuehir Jul 3, 2019
632c39d
Merge branch 'r1.4' of github.com:NervanaSystems/intel-models into r1.4
jitendra42 Jul 3, 2019
1 change: 1 addition & 0 deletions .gitignore
@@ -8,3 +8,4 @@
.coverage
.tox
test_data/
+*.bak
191 changes: 191 additions & 0 deletions Contribute.md
@@ -0,0 +1,191 @@
# Contributing to the Model Zoo for Intel® Architecture

## Adding scripts for a new TensorFlow model

### Code updates

In order to add a new model to the zoo, there are a few things that are
required:

1. Set up the directory structure to allow the
[launch script](/docs/general/tensorflow/LaunchBenchmark.md) to find
your model. This involves creating folders for:
`/benchmarks/<use case>/<framework>/<model name>/<mode>/<precision>`.
Note that you will need to add an `__init__.py` file in each new
directory that you add, so that Python can find the code.

![Directory Structure](benchmarks_directory_structure.png)
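
For instance, here is a minimal sketch of the shell commands that create this structure for a hypothetical `mymodel` image recognition model (the model name and use case are illustrative only):

```bash
# Hypothetical example: create the directory tree for a new FP32
# inference model named "mymodel" (name and use case are placeholders).
mkdir -p benchmarks/image_recognition/tensorflow/mymodel/inference/fp32

# Each new directory needs an __init__.py so that python can find the code.
touch benchmarks/image_recognition/tensorflow/mymodel/__init__.py
touch benchmarks/image_recognition/tensorflow/mymodel/inference/__init__.py
touch benchmarks/image_recognition/tensorflow/mymodel/inference/fp32/__init__.py
```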

2. Next, in the leaf folder that was created in the previous step, you
will need to create `config.json` and `model_init.py` files:

![Add model init](add_model_init_and_config.png)

The `config.json` file contains the best known KMP environment variable
settings to get optimal performance for the model. The default settings
below are recommended for most of the models in the Model Zoo.

```
{
  "optimization_parameters": {
    "KMP_AFFINITY": "granularity=fine,verbose,compact,1,0",
    "KMP_BLOCKTIME": 1,
    "KMP_SETTINGS": 1
  }
}
```

The `model_init.py` file is used to initialize the best known configuration for the
model, and then start executing inference or training. When the
[launch script](/docs/general/tensorflow/LaunchBenchmark.md) is run,
it will look for the appropriate `model_init.py` file to use
according to the model name, framework, mode, and precision that are
specified by the user.

The contents of the `model_init.py` file will vary by framework. For
TensorFlow models, we typically use the
[base model init class](/benchmarks/common/base_model_init.py) that
includes functions for doing common tasks such as setting up the best
known environment variables (`KMP_BLOCKTIME`, `KMP_SETTINGS`, and
`KMP_AFFINITY`, which are loaded from `config.json`, as well as
`OMP_NUM_THREADS`), the number of intra-op threads, and the number of
inter-op threads. The `model_init.py` file also sets up the string that
will ultimately be used to run inference or model training, which
normally includes the use of `numactl` and sending all of the
appropriate arguments to the model's script. Also, if your model
requires any non-standard arguments (arguments that are not part of
the [launch script flags](/docs/general/tensorflow/LaunchBenchmark.md#launch_benchmarkpy-flags)),
the `model_init.py` file is where you would define and parse those
args.

3. [start.sh](/benchmarks/common/tensorflow/start.sh) is a shell script
that is called by the `launch_benchmark.py` script in the docker
container. This script installs dependencies that are required by
the model, sets up the `PYTHONPATH` environment variable, and then
calls the [run_tf_benchmark.py](/benchmarks/common/tensorflow/run_tf_benchmark.py)
script with the appropriate args. That run script will end up calling
the `model_init.py` file that you have defined in the previous step.

To add support for a new model in the `start.sh` script, you will
need to add a function with the same name as your model. Note that
this function name should match the `<model name>` folder from the
first step where you set up the directories for your model. In this
function, add commands to install any third-party dependencies within
an `if [ ${NOINSTALL} != "True" ]; then` conditional block. The
purpose of the `NOINSTALL` flag is to be able to skip the installs
for quicker iteration when running on bare metal or debugging. If
your model requires the `PYTHONPATH` environment variable to be set up
to find model code or dependencies, that should be done in the
model's function. Next, set up the command that will be run. The
standard launch script args are already added to the `CMD` variable,
so your model function will only need to add on more args if you have
model-specific args defined in your `model_init.py`. Lastly, call the
`run_model` function with the `PYTHONPATH` and the `CMD` string.

Below is a sample template of a `start.sh` model function that
installs dependencies from a `requirements.txt` file, sets up the
`PYTHONPATH` to find model source files, adds a custom steps flag
to the run command, and then runs the model:
```bash
function <model_name>() {
  if [ ${PRECISION} == "fp32" ]; then
    # Skip installs when NOINSTALL is set (useful for bare metal or debugging)
    if [ ${NOINSTALL} != "True" ]; then
      pip install -r ${MOUNT_EXTERNAL_MODELS_SOURCE}/requirements.txt
    fi

    # Allow the model code and its dependencies to be found
    export PYTHONPATH=${PYTHONPATH}:${MOUNT_EXTERNAL_MODELS_SOURCE}

    # Append any model-specific args to the standard launch command
    CMD="${CMD} $(add_steps_args)"
    PYTHONPATH=${PYTHONPATH} CMD=${CMD} run_model
  else
    echo "PRECISION=${PRECISION} is not supported for ${MODEL_NAME}"
    exit 1
  fi
}
```

Optional step:
* If there is CPU-optimized model code that has not been upstreamed to
the original repository, then it can be added to the
[models](/models) directory in the zoo repo. As with the first step
in the previous section, the directory structure should be set up like:
`/models/<use case>/<framework>/<model name>/<mode>/<precision>`.

![Models Directory Structure](models_directory_structure.png)

If there are model files that can be shared by multiple modes or
precisions, they can be placed in the higher-level directory. For
example, if a file could be shared by both `FP32` and `Int8`
precisions, then it could be placed in the directory at:
`/models/<use case>/<framework>/<model name>/<mode>` (omitting the
`<precision>` directory). Note that if this is being done, you need to
ensure that the license that is associated with the original model
repository is compatible with the license of the model zoo.

### Debugging

There are a couple of options for debugging and quicker iteration when
developing new scripts:
* Use the `--debug` flag in the `launch_benchmark.py` script, which will
give you a shell into the docker container. See the
[debugging section](/docs/general/tensorflow/LaunchBenchmark.md#debugging)
of the launch script documentation for more information on using this
flag. An example invocation is sketched after this list.
* Run the launch script on bare metal (without a docker container). The
launch script documentation also has a
[section](/docs/general/tensorflow/LaunchBenchmark.md#alpha-feature-running-on-bare-metal)
with instructions on how to do this. Note that when running without
docker, you are responsible for installing all dependencies on your
system before running the launch script. If you are using this option
during development, be sure to also test _with_ a docker container to
ensure that the `start.sh` script dependency installation is working
properly for your model.
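
As a sketch of the first option, a debug session for a ResNet50 FP32 inference run could be launched as shown below. This is an assumed example: the docker image tag and the frozen graph path are placeholders, so substitute the values from your model's README.

```bash
# Hypothetical debug run; image tag and paths are placeholders.
cd benchmarks
python launch_benchmark.py \
    --model-name resnet50 \
    --precision fp32 \
    --mode inference \
    --framework tensorflow \
    --docker-image gcr.io/deeplearning-platform-release/tf-cpu.1-14 \
    --in-graph /home/<user>/resnet50_fp32_pretrained_model.pb \
    --debug
```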

### Documentation updates

1. Create a `README.md` file in the
`/benchmarks/<use case>/<framework>/<model name>` directory:

![Add README file](add_readme.png)

This README file should describe all of the steps necessary to run
the model, including downloading and preprocessing the dataset,
downloading the pretrained model, cloning repositories, and running
the model script with the appropriate arguments. Most models
have best known settings for batch and online inference performance
testing, as well as for testing accuracy. The README file should
specify how to set these configurations using the
`launch_benchmark.py` script.
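
For example, an accuracy test for our hypothetical `mymodel` might be documented with a command like the sketch below (the docker image tag, graph, and dataset paths are placeholders):

```bash
# Hypothetical accuracy test command; all paths are placeholders.
cd benchmarks
python launch_benchmark.py \
    --model-name mymodel \
    --precision fp32 \
    --mode inference \
    --framework tensorflow \
    --accuracy-only \
    --batch-size 100 \
    --docker-image gcr.io/deeplearning-platform-release/tf-cpu.1-14 \
    --in-graph /home/<user>/mymodel_fp32_pretrained_model.pb \
    --data-location /home/<user>/datasets/ImageNet_TFRecords
```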

2. Update the table in the [main `benchmarks` README](/benchmarks/README.md)
with a link to the model that you are adding. Note that the models
in this table are ordered alphabetically by use case, framework, and
model name. The model name should link to the original paper for the
model. The instructions column should link to the README
file that you created in the previous step.

### Testing

1. After you've completed the above steps, run the model according to
the instructions in the README file for the new model. Ensure that the
performance and accuracy metrics are on par with what you would
expect.

2. Add unit tests to cover the new model.
* For TensorFlow models, there is a
[parameterized test](/tests/unit/common/tensorflow/test_run_tf_benchmarks.py#L80)
that checks the flow running from `run_tf_benchmark.py` to the
inference command that is executed by the `model_init.py` file. The
test ensures that the inference command has all of the expected
arguments.

To add a new parameterized instance of the test for your
new model, add a new JSON file named `tf_<model_name>_args.json` to the
[tf_model_args](/tests/unit/common/tensorflow/tf_model_args)
directory. Each file contains a list of dictionaries, and each
dictionary has three items: (1) `_comment`, a comment that describes
the command; (2) `input`, the `run_tf_benchmark.py` command with the
appropriate flags to run the model; and (3) `output`, the expected
inference or training command that should get run by the
`model_init.py` file. A sample file is sketched at the end of this
section.
* If any launch script or base class files were changed, then
additional unit tests should be added.
* Unit tests and style checks are run when you post a GitHub PR, and
the tests must be passing before the PR is merged.
* For information on how to run the unit tests and style checks
locally, see the [tests documentation](/tests/README.md).
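
As a reference for the testing step above, here is a minimal sketch of a `tf_<model_name>_args.json` file for the hypothetical `mymodel`; the commands shown are placeholders rather than a real model's configuration:

```json
[
  {
    "_comment": "Hypothetical example: mymodel FP32 inference",
    "input": "run_tf_benchmark.py --framework=tensorflow --use-case=image_recognition --model-name=mymodel --precision=fp32 --mode=inference --batch-size=100 --in-graph=/in_graph/mymodel_fp32.pb",
    "output": "python /workspace/intelai_models/inference/fp32/infer.py --input-graph=/in_graph/mymodel_fp32.pb --batch-size=100"
  }
]
```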
4 changes: 2 additions & 2 deletions Jenkinsfile
@@ -16,8 +16,8 @@ node('skx') {
sudo apt-get install -y python3-dev || sudo yum install -y python36-devel.x86_64

# virtualenv 16.3.0 is broken do not use it
-python2 -m pip install --force-reinstall --user --upgrade pip virtualenv!=16.3.0 tox
-python3 -m pip install --force-reinstall --user --upgrade pip virtualenv!=16.3.0 tox
+python2 -m pip install --no-cache-dir --user --upgrade pip==19.0.3 virtualenv!=16.3.0 tox
+python3 -m pip install --no-cache-dir --user --upgrade pip==19.0.3 virtualenv!=16.3.0 tox
"""
}
stage('Style tests') {
6 changes: 5 additions & 1 deletion README.md
@@ -8,7 +8,8 @@ This repository contains **links to pre-trained models, sample scripts, best pra
- Show how to efficiently execute, train, and deploy Intel-optimized models
- Make it easy to get started running Intel-optimized models on Intel hardware in the cloud or on bare metal

-***DISCLAIMER: These scripts are not intended for benchmarking Intel platforms. For any performance and/or benchmarking information on specific Intel platforms, visit [https://www.intel.ai/blog](https://www.intel.ai/blog).***
+***DISCLAIMER: These scripts are not intended for benchmarking Intel platforms.
+For any performance and/or benchmarking information on specific Intel platforms, visit [https://www.intel.ai/blog](https://www.intel.ai/blog).***

## How to Use the Model Zoo

@@ -31,3 +32,6 @@ We hope this structure is intuitive and helps you find what you are looking for;
![Repo Structure](repo_structure.png)

*Note: For model quantization and optimization tools, see [https://github.com/IntelAI/tools](https://github.com/IntelAI/tools)*.

+## How to Contribute
+If you would like to add a new benchmarking script, please use [this guide](/Contribute.md).
Binary file added add_model_init_and_config.png
Binary file added add_readme.png
23 changes: 18 additions & 5 deletions benchmarks/README.md
@@ -11,31 +11,44 @@ dependencies to be installed:
* [git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
* `wget` for downloading pre-trained models

-## Use Cases
+## TensorFlow Use Cases

| Use Case | Framework | Model | Mode | Instructions |
| -----------------------| --------------| ------------------- | --------- |------------------------------|
| Adversarial Networks | TensorFlow | [DCGAN](https://arxiv.org/pdf/1511.06434.pdf) | Inference | [FP32](adversarial_networks/tensorflow/dcgan/README.md#fp32-inference-instructions) |
| Content Creation | TensorFlow | [DRAW](https://arxiv.org/pdf/1502.04623.pdf) | Inference | [FP32](content_creation/tensorflow/draw/README.md#fp32-inference-instructions) |
| Face Detection and Alignment | TensorFlow | [FaceNet](https://arxiv.org/pdf/1503.03832.pdf) | Inference | [FP32](face_detection_and_alignment/tensorflow/facenet/README.md#fp32-inference-instructions) |
| Face Detection and Alignment | TensorFlow | [MTCC](https://arxiv.org/pdf/1604.02878.pdf) | Inference | [FP32](face_detection_and_alignment/tensorflow/mtcc/README.md#fp32-inference-instructions) |
| Image Recognition | TensorFlow | [DenseNet169](https://arxiv.org/pdf/1608.06993.pdf) | Inference | [FP32](image_recognition/tensorflow/densenet169/README.md#fp32-inference-instructions) |
| Image Recognition | TensorFlow | [Inception ResNet V2](https://arxiv.org/pdf/1602.07261.pdf) | Inference | [Int8](image_recognition/tensorflow/inception_resnet_v2/README.md#int8-inference-instructions) [FP32](image_recognition/tensorflow/inception_resnet_v2/README.md#fp32-inference-instructions) |
| Image Recognition | TensorFlow | [Inception V3](https://arxiv.org/pdf/1512.00567.pdf) | Inference | [Int8](image_recognition/tensorflow/inceptionv3/README.md#int8-inference-instructions) [FP32](image_recognition/tensorflow/inceptionv3/README.md#fp32-inference-instructions) |
| Image Recognition | TensorFlow | [Inception V4](https://arxiv.org/pdf/1602.07261.pdf) | Inference | [Int8](image_recognition/tensorflow/inceptionv4/README.md#int8-inference-instructions) [FP32](image_recognition/tensorflow/inceptionv4/README.md#fp32-inference-instructions) |
-| Image Recognition | TensorFlow | [MobileNet V1](https://arxiv.org/pdf/1704.04861.pdf) | Inference | [FP32](image_recognition/tensorflow/mobilenet_v1/README.md#fp32-inference-instructions) |
+| Image Recognition | TensorFlow | [MobileNet V1](https://arxiv.org/pdf/1704.04861.pdf) | Inference | [Int8](image_recognition/tensorflow/mobilenet_v1/README.md#int8-inference-instructions) [FP32](image_recognition/tensorflow/mobilenet_v1/README.md#fp32-inference-instructions) |
| Image Recognition | TensorFlow | [ResNet 101](https://arxiv.org/pdf/1512.03385.pdf) | Inference | [Int8](image_recognition/tensorflow/resnet101/README.md#int8-inference-instructions) [FP32](image_recognition/tensorflow/resnet101/README.md#fp32-inference-instructions) |
| Image Recognition | TensorFlow | [ResNet 50](https://arxiv.org/pdf/1512.03385.pdf) | Inference | [Int8](image_recognition/tensorflow/resnet50/README.md#int8-inference-instructions) [FP32](image_recognition/tensorflow/resnet50/README.md#fp32-inference-instructions) |
| Image Recognition | TensorFlow | [ResNet 50v1.5](https://github.com/tensorflow/models/tree/master/official/resnet) | Inference | [Int8](image_recognition/tensorflow/resnet50v1_5/README.md#int8-inference-instructions) [FP32](image_recognition/tensorflow/resnet50v1_5/README.md#fp32-inference-instructions) |
| Image Recognition | TensorFlow | [SqueezeNet](https://arxiv.org/pdf/1602.07360.pdf) | Inference | [FP32](image_recognition/tensorflow/squeezenet/README.md#fp32-inference-instructions) |
| Image Segmentation | TensorFlow | [Mask R-CNN](https://arxiv.org/pdf/1703.06870.pdf) | Inference | [FP32](image_segmentation/tensorflow/maskrcnn/README.md#fp32-inference-instructions) |
| Image Segmentation | TensorFlow | [UNet](https://arxiv.org/pdf/1505.04597.pdf) | Inference | [FP32](image_segmentation/tensorflow/unet/README.md#fp32-inference-instructions) |
| Language Modeling | TensorFlow | [LM-1B](https://arxiv.org/pdf/1602.02410.pdf) | Inference | [FP32](language_modeling/tensorflow/lm-1b/README.md#fp32-inference-instructions) |
| Language Translation | TensorFlow | [GNMT](https://arxiv.org/pdf/1609.08144.pdf) | Inference | [FP32](language_translation/tensorflow/gnmt/README.md#fp32-inference-instructions) |
| Language Translation | TensorFlow | [Transformer Language](https://arxiv.org/pdf/1706.03762.pdf)| Inference | [FP32](language_translation/tensorflow/transformer_language/README.md#fp32-inference-instructions) |
| Language Translation | TensorFlow | [Transformer_LT_Official ](https://arxiv.org/pdf/1706.03762.pdf)| Inference | [FP32](language_translation/tensorflow/transformer_lt_official/README.md#fp32-inference-instructions) |
-| Object Detection | TensorFlow | [R-FCN](https://arxiv.org/pdf/1605.06409.pdf) | Inference | [FP32](object_detection/tensorflow/rfcn/README.md#fp32-inference-instructions) |
+| Object Detection | TensorFlow | [R-FCN](https://arxiv.org/pdf/1605.06409.pdf) | Inference | [Int8](object_detection/tensorflow/rfcn/README.md#int8-inference-instructions) [FP32](object_detection/tensorflow/rfcn/README.md#fp32-inference-instructions) |
| Object Detection | TensorFlow | [Faster R-CNN](https://arxiv.org/pdf/1506.01497.pdf) | Inference | [Int8](object_detection/tensorflow/faster_rcnn/README.md#int8-inference-instructions) [FP32](object_detection/tensorflow/faster_rcnn/README.md#fp32-inference-instructions) |
-| Object Detection | TensorFlow | [SSD-MobileNet](https://arxiv.org/pdf/1704.04861.pdf) | Inference | [FP32](object_detection/tensorflow/ssd-mobilenet/README.md#fp32-inference-instructions) |
-| Object Detection | TensorFlow | [SSD-ResNet34](https://arxiv.org/pdf/1512.02325.pdf) | Inference | [FP32](object_detection/tensorflow/ssd-resnet34/README.md#fp32-inference-instructions) |
+| Object Detection | TensorFlow | [SSD-MobileNet](https://arxiv.org/pdf/1704.04861.pdf) | Inference | [Int8](object_detection/tensorflow/ssd-mobilenet/README.md#int8-inference-instructions) [FP32](object_detection/tensorflow/ssd-mobilenet/README.md#fp32-inference-instructions) |
+| Object Detection | TensorFlow | [SSD-ResNet34](https://arxiv.org/pdf/1512.02325.pdf) | Inference | [Int8](object_detection/tensorflow/ssd-resnet34/README.md#int8-inference-instructions) [FP32](object_detection/tensorflow/ssd-resnet34/README.md#fp32-inference-instructions) |
| Object Detection | TensorFlow | [SSD-VGG16](https://arxiv.org/pdf/1512.02325.pdf) | Inference | [Int8](object_detection/tensorflow/ssd_vgg16/README.md#int8-inference-instructions) [FP32](object_detection/tensorflow/ssd_vgg16/README.md#fp32-inference-instructions) |
| Recommendation | TensorFlow | [NCF](https://arxiv.org/pdf/1708.05031.pdf) | Inference | [FP32](recommendation/tensorflow/ncf/README.md#fp32-inference-instructions) |
| Recommendation | TensorFlow | [Wide & Deep Large Dataset](https://arxiv.org/pdf/1606.07792.pdf) | Inference | [Int8](recommendation/tensorflow/wide_deep_large_ds/README.md#int8-inference-instructions) [FP32](recommendation/tensorflow/wide_deep_large_ds/README.md#fp32-inference-instructions) |
| Recommendation | TensorFlow | [Wide & Deep](https://arxiv.org/pdf/1606.07792.pdf) | Inference | [FP32](recommendation/tensorflow/wide_deep/README.md#fp32-inference-instructions) |
| Text-to-Speech | TensorFlow | [WaveNet](https://arxiv.org/pdf/1609.03499.pdf) | Inference | [FP32](text_to_speech/tensorflow/wavenet/README.md#fp32-inference-instructions) |


## TensorFlow Serving Use Cases


| Use Case | Framework | Model | Mode | Instructions |
| -----------------------| --------------| ------------------- | --------- |------------------------------|
| Image Recognition | TensorFlow Serving | [Inception V3](https://arxiv.org/pdf/1512.00567.pdf) | Inference | [FP32](image_recognition/tensorflow_serving/inceptionv3/README.md#fp32-inference-instructions) |
