# Mask R-CNN

This document has instructions for how to run Mask R-CNN for the following modes/precisions:

* FP32 inference

## FP32 Inference Instructions

  1. Download the MS COCO 2014 dataset.
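Step 1 can be sketched as below. The cocodataset.org URLs are the standard public download locations for the 2014 release; the `COCO2014` directory name is an assumption chosen to match the `--data-location` used later, and the download is guarded behind an explicit `DOWNLOAD=1` opt-in because the archives are several GB:

```shell
# Sketch of step 1: fetch the MS COCO 2014 validation images and
# annotations into a COCO2014 directory (an assumed name matching
# the --data-location used in the benchmarking step).
COCO_BASE="http://images.cocodataset.org"
mkdir -p COCO2014
if [ "${DOWNLOAD:-0}" = "1" ]; then
    # The archives are several GB; only fetch when explicitly requested.
    wget -q "${COCO_BASE}/zips/val2014.zip" -P COCO2014
    wget -q "${COCO_BASE}/annotations/annotations_trainval2014.zip" -P COCO2014
    (cd COCO2014 && unzip -q val2014.zip && unzip -q annotations_trainval2014.zip)
fi
```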

  2. Clone the Mask R-CNN model repository; it is used as the external model directory for dependencies. Then clone the MS COCO API repository inside the Mask_RCNN directory that you just cloned. You can get the MS COCO API from the MS COCO API fork with fixes for Python 3, or from the original MS COCO API repository and apply this pull request for the Python 3 fixes.

$ git clone https://github.com/matterport/Mask_RCNN.git
$ cd Mask_RCNN

$ git clone https://github.com/waleedka/coco.git
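After cloning, the COCO Python API (pycocotools) typically needs to be built before Mask_RCNN can import it. A minimal sketch, assuming the Makefile that ships in the repo's `PythonAPI` directory and a Python environment with Cython and numpy already installed:

```shell
# Build pycocotools inside the cloned coco repo so that Mask_RCNN
# can "import pycocotools". Requires Cython and numpy.
COCO_API_DIR="coco/PythonAPI"
if [ -d "$COCO_API_DIR" ]; then
    (cd "$COCO_API_DIR" && make)
else
    echo "Note: $COCO_API_DIR not found; clone the coco repo first" >&2
fi
```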
  3. Download the pre-trained COCO weights (mask_rcnn_coco.h5) from the Mask R-CNN repository release page, and place the file in the Mask_RCNN directory (from step 2).
$ wget -q https://github.com/matterport/Mask_RCNN/releases/download/v2.0/mask_rcnn_coco.h5 
  4. Clone this intelai/models repository:
$ git clone https://github.com/IntelAI/models.git

This repository includes launch scripts for running benchmarks and an optimized version of the Mask R-CNN model code.

  5. Navigate to the benchmarks directory in your local clone of the intelai/models repo from step 4. The launch_benchmark.py script in the benchmarks directory is used for starting a benchmarking run in an optimized TensorFlow Docker container. It has arguments to specify which model, framework, mode, precision, and Docker image to use, along with your path to the external model directory for --model-source-dir (from step 2) and --data-location (from step 1).

Run benchmarking for throughput and latency with --batch-size=1:

$ cd /home/<user>/models/benchmarks

$ python launch_benchmark.py \
    --model-source-dir /home/<user>/Mask_RCNN \
    --model-name maskrcnn \
    --framework tensorflow \
    --precision fp32 \
    --mode inference \
    --batch-size 1 \
    --socket-id 0 \
    --data-location /home/<user>/COCO2014 \
    --docker-image intelaipg/intel-optimized-tensorflow:latest-devel-mkl-py3
  6. Log files are located at the value of --output-dir.
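A quick way to inspect the end of the most recent run's log; the `benchmark_maskrcnn_*` file-name pattern matches the sample log name shown in this document, and `LOG_DIR` is a placeholder for your `--output-dir` value:

```shell
# Tail the newest Mask R-CNN benchmark log, if one exists.
# LOG_DIR stands in for the --output-dir value you passed to
# launch_benchmark.py; the glob matches the log naming shown below.
LOG_DIR="${LOG_DIR:-.}"
latest_log=$(ls -t "${LOG_DIR}"/benchmark_maskrcnn_*.log 2>/dev/null | head -n 1)
if [ -n "$latest_log" ]; then
    tail -n 20 "$latest_log"
fi
```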

Below is a sample log file tail when running benchmarking for throughput and latency:

Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=0.23s).
Accumulating evaluation results...
DONE (t=0.14s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.442
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.612
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.483
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.216
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.474
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.621
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.373
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.461
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.473
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.237
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.500
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.654
Batch size: 1
Time spent per BATCH: 609.6943 ms
Total samples/sec: 1.6402 samples/s
Total time:  35.407243490219116
lscpu_path_cmd = command -v lscpu
lscpu located here: b'/usr/bin/lscpu'
Log location outside container: {--output-dir value}/benchmark_maskrcnn_inference_fp32_20190111_205935.log