
ModelZoo Status (tag=v1.10.0)

chudegao edited this page Mar 17, 2022 · 1 revision

Report generated at 2022-03-17T18:43:35Z via GitHub Actions (run 1999254040).

Environment

| Package | Version |
| --- | --- |
| Platform | Linux-5.11.0-1028-azure-x86_64-with-glibc2.2.5 |
| Python | 3.8.12 (default, Oct 18 2021, 14:07:50) [GCC 9.3.0] |
| onnx | 1.10.2 |
| onnx-tf | 1.10.0 (e2a8a71) |
| tensorflow | 2.8.0 |
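Each model below is exercised in three stages, matching the three rightmost table columns: the ONNX checker, conversion through onnx-tf, and a run against the zoo's test data. A minimal sketch of that flow, using the public `onnx` and `onnx_tf` APIs (the `validate` helper name and its argument shapes are assumptions, not the actual harness):

```python
# Sketch of the per-model check pipeline behind this report's columns.
# `validate` is a hypothetical helper, not the real CI harness.
import importlib.util

def validate(model_path, feed, expected, rtol=1e-3, atol=1e-3):
    """Check one model: checker -> onnx-tf convert -> run -> compare."""
    import onnx
    import numpy as np
    from onnx_tf.backend import prepare

    model = onnx.load(model_path)
    onnx.checker.check_model(model)   # "ONNX Checker" column
    rep = prepare(model)              # "ONNX-TF Converted" column
    outputs = rep.run(feed)           # "ONNX-TF Ran" column
    # Same tolerances as the mismatch messages in this report.
    np.testing.assert_allclose(outputs[0], expected, rtol=rtol, atol=atol)

if importlib.util.find_spec("onnx_tf") is None:
    print("onnx-tf not installed; sketch only")
```

A conversion failure surfaces in the "ONNX-TF Converted" column (and the run is then skipped, ➖), while numeric mismatches surface in "ONNX-TF Ran".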

Summary

| Value | Count |
| --- | --- |
| Models | 39 |
| Total | 154 |
| ✔️ Passed | 117 |
| ⚠️ Limitation | 15 |
| ❌ Failed | 22 |
| ➖ Skipped | 0 |
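The failure strings in the tables below of the form `['Not equal to tolerance rtol=0.001, atol=0.001', …]` are the line-split output of numpy's comparison helper. A minimal reproduction of that message (the array values here are illustrative, not taken from any model):

```python
# Reproduce the shape of the mismatch messages reported below.
import numpy as np

expected = np.array([1.0, 2.0, 3.0])
actual = np.array([1.0, 2.0, 3.2])  # one element outside tolerance

try:
    np.testing.assert_allclose(actual, expected, rtol=0.001, atol=0.001)
except AssertionError as e:
    msg = str(e)

assert "Not equal to tolerance" in msg
assert "Mismatched elements" in msg
```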

Details

1. bert-squad

text/machine_comprehension/bert-squad

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ❌ | 1. bertsquad-10 | 416M | 5 | 10 | 🆗 | 🆗 | ['Items are not equal:', " ACTUAL: dtype('float32')", " DESIRED: dtype('int64')"] |
| ❌ | 2. bertsquad-12-int8 | 119M | 7 | 12 | 🆗 | ValueError: 'onnx_tf_prefix_bert/embeddings/MatMul:0_output_quantized_cast' is not a valid root scope name. A root scope name has to match the following pattern: ^[A-Za-z0-9.][A-Za-z0-9_.\/>-]*$ | ➖ |
| ❌ | 3. bertsquad-12 | 416M | 7 | 12 | 🆗 | 🆗 | ['Not equal to tolerance rtol=0.001, atol=0.001', '', 'Mismatched elements: 185 / 256 (72.3%)', 'Max absolute difference: 0.09927511', 'Max relative difference: 0.01622382'] |
| ❌ | 4. bertsquad-8 | 416M | 5 | 8 | 🆗 | 🆗 | ValueError: Tried to convert 'input' to a tensor and failed. Error: None values not supported. |

2. bidirectional_attention_flow

text/machine_comprehension/bidirectional_attention_flow

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ⚠️ | 1. bidaf-9 | 42M | 4 | 9 | 🆗 | BackendIsNotSupposedToImplementIt: CategoryMapper is not implemented. | ➖ |

3. gpt-2

text/machine_comprehension/gpt-2

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ✔️ | 1. gpt2-10 | 523M | 6 | 10 | 🆗 | 🆗 | 🆗 |
| ✔️ | 2. gpt2-lm-head-10 | 634M | 6 | 10 | 🆗 | 🆗 | 🆗 |

4. gpt2-bs

text/machine_comprehension/gpt2-bs

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ❌ | 1. gpt2-lm-head-bs-12 | 634M | 7 | 12 | 🆗 | TypeError: Expected any non-tensor type, but got a tensor instead. | ➖ |

5. roberta

text/machine_comprehension/roberta

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ✔️ | 1. roberta-base-11 | 476M | 6 | 11 | 🆗 | 🆗 | 🆗 |
| ✔️ | 2. roberta-sequence-classification-9 | 476M | 6 | 9 | 🆗 | 🆗 | 🆗 |

6. t5

text/machine_comprehension/t5

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ⚠️ | 1. t5-decoder-with-lm-head-12 | 620M | 6 | 12 | 🆗 | 🆗 | No test data provided in model zoo |
| ⚠️ | 2. t5-encoder-12 | 620M | 6 | 12 | 🆗 | 🆗 | No test data provided in model zoo |

7. age_gender

vision/body_analysis/age_gender

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ⚠️ | 1. age_googlenet | 23M | 6 | 11 | 🆗 | 🆗 | No test data provided in model zoo |
| ⚠️ | 2. gender_googlenet | 23M | 6 | 11 | 🆗 | 🆗 | No test data provided in model zoo |
| ⚠️ | 3. vgg_ilsvrc_16_age_chalearn_iccv2015 | 514M | 6 | 11 | 🆗 | 🆗 | No test data provided in model zoo |
| ⚠️ | 4. vgg_ilsvrc_16_age_imdb_wiki | 514M | 6 | 11 | 🆗 | 🆗 | No test data provided in model zoo |
| ⚠️ | 5. vgg_ilsvrc_16_gender_imdb_wiki | 512M | 6 | 11 | 🆗 | 🆗 | No test data provided in model zoo |

8. arcface

vision/body_analysis/arcface

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ✔️ | 1. arcfaceresnet100-8 | 249M | 3 | 8 | 🆗 | 🆗 | 🆗 |

9. emotion_ferplus

vision/body_analysis/emotion_ferplus

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ✔️ | 1. emotion-ferplus-2 | 33M | 3 | 2 | 🆗 | 🆗 | 🆗 |
| ✔️ | 2. emotion-ferplus-7 | 33M | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 3. emotion-ferplus-8 | 33M | 3 | 8 | 🆗 | 🆗 | 🆗 |

10. ultraface

vision/body_analysis/ultraface

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ⚠️ | 1. version-RFB-320 | 1M | 4 | 9 | 🆗 | 🆗 | No test data provided in model zoo |
| ⚠️ | 2. version-RFB-640 | 2M | 4 | 9 | 🆗 | 🆗 | No test data provided in model zoo |

11. alexnet

vision/classification/alexnet

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ❌ | 1. bvlcalexnet-12-int8 | 58M | 7 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,11,11], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ |
| ✔️ | 2. bvlcalexnet-12 | 233M | 7 | 12 | 🆗 | 🆗 | 🆗 |
| ✔️ | 3. bvlcalexnet-3 | 233M | 3 | 3 | 🆗 | 🆗 | 🆗 |
| ✔️ | 4. bvlcalexnet-6 | 233M | 3 | 6 | 🆗 | 🆗 | 🆗 |
| ✔️ | 5. bvlcalexnet-7 | 233M | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 6. bvlcalexnet-8 | 233M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 7. bvlcalexnet-9 | 233M | 3 | 9 | 🆗 | 🆗 | 🆗 |

12. caffenet

vision/classification/caffenet

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ❌ | 1. caffenet-12-int8 | 58M | 7 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,11,11], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ |
| ✔️ | 2. caffenet-12 | 233M | 7 | 12 | 🆗 | 🆗 | 🆗 |
| ✔️ | 3. caffenet-3 | 233M | 3 | 3 | 🆗 | 🆗 | 🆗 |
| ✔️ | 4. caffenet-6 | 233M | 3 | 6 | 🆗 | 🆗 | 🆗 |
| ✔️ | 5. caffenet-7 | 233M | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 6. caffenet-8 | 233M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 7. caffenet-9 | 233M | 3 | 9 | 🆗 | 🆗 | 🆗 |

13. densenet-121

vision/classification/densenet-121

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ✔️ | 1. densenet-3 | 31M | 3 | 3 | 🆗 | 🆗 | 🆗 |
| ✔️ | 2. densenet-6 | 31M | 3 | 9 | 🆗 | 🆗 | 🆗 |
| ✔️ | 3. densenet-7 | 31M | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 4. densenet-8 | 31M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 5. densenet-9 | 31M | 3 | 9 | 🆗 | 🆗 | 🆗 |

14. efficientnet-lite4

vision/classification/efficientnet-lite4

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ❌ | 1. efficientnet-lite4-11-int8 | 13M | 6 | 11 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,3,3], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ |
| ✔️ | 2. efficientnet-lite4-11 | 50M | 6 | 11 | 🆗 | 🆗 | 🆗 |

15. googlenet

vision/classification/inception_and_googlenet/googlenet

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ❌ | 1. googlenet-12-int8 | 7M | 7 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,7,7], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ |
| ✔️ | 2. googlenet-12 | 27M | 7 | 12 | 🆗 | 🆗 | 🆗 |
| ✔️ | 3. googlenet-3 | 27M | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 4. googlenet-6 | 27M | 3 | 9 | 🆗 | 🆗 | 🆗 |
| ✔️ | 5. googlenet-7 | 27M | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 6. googlenet-8 | 27M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 7. googlenet-9 | 27M | 3 | 9 | 🆗 | 🆗 | 🆗 |

16. inception_v1

vision/classification/inception_and_googlenet/inception_v1

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ❌ | 1. inception-v1-12-int8 | 10M | 7 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,7,7], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ |
| ✔️ | 2. inception-v1-12 | 27M | 7 | 12 | 🆗 | 🆗 | 🆗 |
| ✔️ | 3. inception-v1-3 | 27M | 3 | 3 | 🆗 | 🆗 | 🆗 |
| ✔️ | 4. inception-v1-6 | 27M | 3 | 6 | 🆗 | 🆗 | 🆗 |
| ✔️ | 5. inception-v1-7 | 27M | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 6. inception-v1-8 | 27M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 7. inception-v1-9 | 27M | 3 | 9 | 🆗 | 🆗 | 🆗 |

17. inception_v2

vision/classification/inception_and_googlenet/inception_v2

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ✔️ | 1. inception-v2-3 | 43M | 3 | 3 | 🆗 | 🆗 | 🆗 |
| ✔️ | 2. inception-v2-6 | 43M | 3 | 6 | 🆗 | 🆗 | 🆗 |
| ✔️ | 3. inception-v2-7 | 43M | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 4. inception-v2-8 | 43M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 5. inception-v2-9 | 43M | 3 | 9 | 🆗 | 🆗 | 🆗 |

18. mnist

vision/classification/mnist

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ✔️ | 1. mnist-1 | 27K | 3 | 1 | 🆗 | 🆗 | 🆗 |
| ✔️ | 2. mnist-7 | 26K | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 3. mnist-8 | 26K | 3 | 8 | 🆗 | 🆗 | 🆗 |

19. mobilenet

vision/classification/mobilenet

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ✔️ | 1. mobilenetv2-7 | 13M | 6 | 10 | 🆗 | 🆗 | 🆗 |

20. rcnn_ilsvrc13

vision/classification/rcnn_ilsvrc13

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ✔️ | 1. rcnn-ilsvrc13-3 | 220M | 3 | 3 | 🆗 | 🆗 | 🆗 |
| ✔️ | 2. rcnn-ilsvrc13-6 | 220M | 3 | 6 | 🆗 | 🆗 | 🆗 |
| ✔️ | 3. rcnn-ilsvrc13-7 | 220M | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 4. rcnn-ilsvrc13-8 | 220M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 5. rcnn-ilsvrc13-9 | 220M | 3 | 9 | 🆗 | 🆗 | 🆗 |

21. resnet

vision/classification/resnet

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ✔️ | 1. resnet101-v1-7 | 171M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 2. resnet101-v2-7 | 170M | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 3. resnet152-v1-7 | 231M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 4. resnet152-v2-7 | 230M | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 5. resnet18-v1-7 | 45M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 6. resnet18-v2-7 | 45M | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 7. resnet34-v1-7 | 83M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 8. resnet34-v2-7 | 83M | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 9. resnet50-caffe2-v1-3 | 98M | 3 | 3 | 🆗 | 🆗 | 🆗 |
| ✔️ | 10. resnet50-caffe2-v1-6 | 98M | 3 | 6 | 🆗 | 🆗 | 🆗 |
| ✔️ | 11. resnet50-caffe2-v1-7 | 98M | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 12. resnet50-caffe2-v1-8 | 98M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 13. resnet50-caffe2-v1-9 | 98M | 3 | 9 | 🆗 | 🆗 | 🆗 |
| ❌ | 14. resnet50-v1-12-int8 | 21M | 4 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,7,7], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ |
| ✔️ | 15. resnet50-v1-12 | 92M | 4 | 12 | 🆗 | 🆗 | 🆗 |
| ✔️ | 16. resnet50-v1-7 | 98M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 17. resnet50-v2-7 | 98M | 3 | 7 | 🆗 | 🆗 | 🆗 |

22. shufflenet

vision/classification/shufflenet

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ✔️ | 1. shufflenet-3 | 5M | 3 | 3 | 🆗 | 🆗 | 🆗 |
| ✔️ | 2. shufflenet-6 | 5M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 3. shufflenet-7 | 5M | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 4. shufflenet-8 | 5M | 3 | 6 | 🆗 | 🆗 | 🆗 |
| ✔️ | 5. shufflenet-9 | 5M | 3 | 9 | 🆗 | 🆗 | 🆗 |
| ✔️ | 6. shufflenet-v2-10 | 9M | 6 | 10 | 🆗 | 🆗 | 🆗 |
| ❌ | 7. shufflenet-v2-12-int8 | 2M | 7 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,3,3], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ |
| ✔️ | 8. shufflenet-v2-12 | 9M | 7 | 12 | 🆗 | 🆗 | 🆗 |

23. squeezenet

vision/classification/squeezenet

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ❌ | 1. squeezenet1 | 1M | 7 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,3,3], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ |
| ✔️ | 2. squeezenet1 | 5M | 7 | 12 | 🆗 | 🆗 | 🆗 |
| ✔️ | 3. squeezenet1 | 5M | 3 | 3 | 🆗 | 🆗 | 🆗 |
| ✔️ | 4. squeezenet1 | 5M | 3 | 6 | 🆗 | 🆗 | 🆗 |
| ✔️ | 5. squeezenet1 | 5M | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 6. squeezenet1 | 5M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 7. squeezenet1 | 5M | 3 | 9 | 🆗 | 🆗 | 🆗 |
| ✔️ | 8. squeezenet1 | 5M | 3 | 7 | 🆗 | 🆗 | 🆗 |

24. vgg

vision/classification/vgg

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ❌ | 1. vgg16-12-int8 | 132M | 7 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,3,3], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ |
| ✔️ | 2. vgg16-12 | 528M | 7 | 12 | 🆗 | 🆗 | 🆗 |
| ✔️ | 3. vgg16-7 | 528M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 4. vgg16-bn-7 | 528M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 5. vgg19-7 | 548M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 6. vgg19-bn-7 | 548M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 7. vgg19-caffe2-3 | 548M | 3 | 3 | 🆗 | 🆗 | 🆗 |
| ✔️ | 8. vgg19-caffe2-6 | 548M | 3 | 6 | 🆗 | 🆗 | 🆗 |
| ✔️ | 9. vgg19-caffe2-7 | 548M | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 10. vgg19-caffe2-8 | 548M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 11. vgg19-caffe2-9 | 548M | 3 | 9 | 🆗 | 🆗 | 🆗 |

25. zfnet-512

vision/classification/zfnet-512

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ❌ | 1. zfnet512-12-int8 | 83M | 7 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,7,7], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ |
| ✔️ | 2. zfnet512-12 | 333M | 7 | 12 | 🆗 | 🆗 | 🆗 |
| ✔️ | 3. zfnet512-3 | 333M | 3 | 3 | 🆗 | 🆗 | 🆗 |
| ✔️ | 4. zfnet512-6 | 333M | 3 | 6 | 🆗 | 🆗 | 🆗 |
| ✔️ | 5. zfnet512-7 | 333M | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 6. zfnet512-8 | 333M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 7. zfnet512-9 | 333M | 3 | 9 | 🆗 | 🆗 | 🆗 |

26. duc

vision/object_detection_segmentation/duc

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ❌ | 1. ResNet101-DUC-7 | 249M | 3 | 7 | 🆗 | 🆗 | ['Not equal to tolerance rtol=0.001, atol=0.001', '', 'Mismatched elements: 529669 / 3040000 (17.4%)', 'Max absolute difference: 0.99922323', 'Max relative difference: 1.'] |

27. faster-rcnn

vision/object_detection_segmentation/faster-rcnn

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ❌ | 1. FasterRCNN-10 | 160M | 4 | 10 | 🆗 | 🆗 | ['Not equal to tolerance rtol=0.001, atol=0.001', '', '(shapes (80, 4), (77, 4) mismatch)', ' x: array([[ 588.1974 , 206.06781, 743.93445, 259.08737],', ' [ 273.3301 , 432.5252 , 327.75058, 490.80524],'] |

28. fcn

vision/object_detection_segmentation/fcn

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ⚠️ | 1. fcn-resnet101-11 | 207M | 6 | 11 | 🆗 | OpUnsupportedException: Resize coordinate_transformation_mode=pytorch_half_pixel is not supported in Tensorflow. | ➖ |
| ⚠️ | 2. fcn-resnet50-11 | 135M | 6 | 11 | 🆗 | OpUnsupportedException: Resize coordinate_transformation_mode=pytorch_half_pixel is not supported in Tensorflow. | ➖ |
| ❌ | 3. fcn-resnet50-12-int8 | 34M | 7 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,7,7], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ |
| ⚠️ | 4. fcn-resnet50-12 | 135M | 7 | 12 | 🆗 | OpUnsupportedException: Resize coordinate_transformation_mode=pytorch_half_pixel is not supported in Tensorflow. | ➖ |

29. mask-rcnn

vision/object_detection_segmentation/mask-rcnn

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ❌ | 1. MaskRCNN-10 | 170M | 4 | 10 | 🆗 | 🆗 | ['Not equal to tolerance rtol=0.001, atol=0.001', '', '(shapes (63, 4), (62, 4) mismatch)', ' x: array([[ 475.39844 , 286.16537 , 803.5265 , 553.59546 ],', ' [ 873.8628 , 402.8742 , 909.85596 , 428.6966 ],'] |

30. retinanet

vision/object_detection_segmentation/retinanet

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ✔️ | 1. retinanet-9 | 218M | 6 | 9 | 🆗 | 🆗 | 🆗 |

31. ssd

vision/object_detection_segmentation/ssd

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ✔️ | 1. ssd-10 | 77M | 4 | 10 | 🆗 | 🆗 | 🆗 |
| ❌ | 2. ssd-12-int8 | 20M | 7 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,7,7], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ |
| ✔️ | 3. ssd-12 | 77M | 7 | 12 | 🆗 | 🆗 | 🆗 |

32. ssd-mobilenetv1

vision/object_detection_segmentation/ssd-mobilenetv1

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ❌ | 1. ssd_mobilenet_v1_10 | 28M | 5 | 10 | 🆗 | 🆗 | FileNotFoundError: [Errno 2] No such file or directory: 'wiki/test_model_and_data/ssd_mobilenet_v1/test_data_set_1/input_0.pb' |
| ❌ | 2. ssd_mobilenet_v1_12-int8 | 9M | 7 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,3,3], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ |
| ✔️ | 3. ssd_mobilenet_v1_12 | 28M | 7 | 12 | 🆗 | 🆗 | 🆗 |

33. tiny-yolov2

vision/object_detection_segmentation/tiny-yolov2

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ✔️ | 1. tinyyolov2-7 | 61M | 5 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 2. tinyyolov2-8 | 61M | 5 | 8 | 🆗 | 🆗 | 🆗 |

34. tiny-yolov3

vision/object_detection_segmentation/tiny-yolov3

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ✔️ | 1. tiny-yolov3-11 | 34M | 6 | 11 | 🆗 | 🆗 | 🆗 |

35. yolov2-coco

vision/object_detection_segmentation/yolov2-coco

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ⚠️ | 1. yolov2-coco-9 | 195M | 4 | 9 | 🆗 | 🆗 | No test data provided in model zoo |

36. yolov3

vision/object_detection_segmentation/yolov3

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ✔️ | 1. yolov3-10 | 236M | 5 | 10 | 🆗 | 🆗 | 🆗 |

37. yolov4

vision/object_detection_segmentation/yolov4

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ⚠️ | 1. yolov4 | 246M | 6 | 11 | 🆗 | OpUnsupportedException: Resize coordinate_transformation_mode=half_pixel and mode=nearest is not supported in Tensorflow. | ➖ |

38. fast_neural_style

vision/style_transfer/fast_neural_style

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ✔️ | 1. candy-8 | 6M | 4 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 2. candy-9 | 6M | 4 | 9 | 🆗 | 🆗 | 🆗 |
| ✔️ | 3. mosaic-8 | 6M | 4 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 4. mosaic-9 | 6M | 4 | 9 | 🆗 | 🆗 | 🆗 |
| ✔️ | 5. pointilism-8 | 6M | 4 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 6. pointilism-9 | 6M | 4 | 9 | 🆗 | 🆗 | 🆗 |
| ✔️ | 7. rain-princess-8 | 6M | 4 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 8. rain-princess-9 | 6M | 4 | 9 | 🆗 | 🆗 | 🆗 |
| ✔️ | 9. udnie-8 | 6M | 4 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 10. udnie-9 | 6M | 4 | 9 | 🆗 | 🆗 | 🆗 |

39. sub_pixel_cnn_2016

vision/super_resolution/sub_pixel_cnn_2016

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ✔️ | 1. super-resolution-10 | 234K | 4 | 10 | 🆗 | 🆗 | 🆗 |