Keras implementation of Mask R-CNN instance-aware segmentation, as described in Mask R-CNN by Kaiming He, Georgia Gkioxari, Piotr Dollár and Ross Girshick, using RetinaNet as its base.
This repository doesn't strictly implement Mask R-CNN as described in the paper. The difference is that the paper uses an RPN to propose ROIs, and performs bounding box regression, classification and mask estimation on those ROIs simultaneously. This repository instead uses RetinaNet for bounding box regression and classification, and builds a mask estimation head on top of those predictions.
In theory RetinaNet can be configured to act as an RPN, which would make this network identical to Mask R-CNN, but doing so would require more layers and complexity than is actually necessary. Less is more :)
- Clone this repository.
- Install keras-retinanet (`pip install keras-retinanet --user`). Make sure `tensorflow` v1.13.1 is installed and is using the GPU.
- Optionally, install `pycocotools` if you want to train / test on the MS COCO dataset by running `pip install --user git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI`.
- Run `pip install keras-maskrcnn --user` to install the latest release, or run `pip install . --user` in the repository to install that specific version.
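As a sketch, a from-source installation could look like this (the repository URL is an assumption; adjust it if you cloned from elsewhere):

```shell
# Install the dependencies first.
pip install keras-retinanet --user
# Optional, only needed for MS COCO training / evaluation:
pip install --user git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI

# Install keras-maskrcnn from source (assumes the fizyr repository URL).
git clone https://github.com/fizyr/keras-maskrcnn.git
cd keras-maskrcnn
pip install . --user
```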
An example of testing the network can be seen in this Notebook. In general, inference with the network works as follows:
```python
outputs = model.predict_on_batch(inputs)
boxes  = outputs[-4]
scores = outputs[-3]
labels = outputs[-2]
masks  = outputs[-1]
```
`boxes` is shaped `(None, None, 4)` (for `(x1, y1, x2, y2)`), `scores` is shaped `(None, None)` (classification score), `labels` is shaped `(None, None)` (label corresponding to the score) and `masks` is shaped `(None, None, 28, 28)`. In all four outputs, the first dimension indexes the batch and the second dimension indexes the list of detections.
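As a rough sketch of how these outputs might be consumed (the preprocessing helpers are assumed to come from keras-retinanet's `keras_retinanet.utils.image` module, the image path is hypothetical, and the 0.5 score threshold is an arbitrary choice):

```python
import numpy as np
from keras_retinanet.utils.image import read_image_bgr, preprocess_image, resize_image

# Load and preprocess a single image (hypothetical path).
image = read_image_bgr('/path/to/image.jpg')
image = preprocess_image(image)
image, scale = resize_image(image)

outputs = model.predict_on_batch(np.expand_dims(image, axis=0))
boxes, scores, labels, masks = outputs[-4], outputs[-3], outputs[-2], outputs[-1]

# Boxes are predicted at the resized scale; map them back to the original image.
boxes /= scale

# Detections are sorted by score and padded with -1, so we can stop
# at the first low-confidence detection.
for box, score, label, mask in zip(boxes[0], scores[0], labels[0], masks[0]):
    if score < 0.5:
        break
    # `mask` is a fixed-size 28x28 crop; to overlay it on the image,
    # resize it to the box size and threshold it.
    print(label, score, box)
```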
Loading models can be done in the following manner:
```python
from keras_maskrcnn.models import load_model
model = load_model('/path/to/model.h5', backbone_name='resnet50')
```
Execution time on an NVIDIA Pascal Titan X is roughly 175 ms per image.
Example output images using `keras-maskrcnn` are shown below.
`keras-maskrcnn` can be trained using this script. Note that the train script uses relative imports since it is inside the `keras_maskrcnn` package. If you want to adjust the script for your own use outside of this repository, you will need to switch it to use absolute imports.
For training on MS COCO, run:
```shell
# Running directly from the repository:
./keras_maskrcnn/bin/train.py coco /path/to/MS/COCO

# Using the installed script:
maskrcnn-train coco /path/to/MS/COCO
```
The pretrained MS COCO model can be downloaded here. Results using the `cocoapi` are shown below (note: the closest resembling architecture in the Mask R-CNN paper achieves an mAP of 0.336).
```
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.278
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.488
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.286
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.127
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.312
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.392
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.251
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.386
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.405
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.219
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.452
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.565
```
For training on a custom dataset, a CSV file can be used as a way to pass the data. See below for more details on the format of these CSV files. To train using your CSV, run:
```shell
# Running directly from the repository:
./keras_maskrcnn/bin/train.py csv /path/to/csv/file/containing/annotations /path/to/csv/file/containing/classes

# Using the installed script:
maskrcnn-train csv /path/to/csv/file/containing/annotations /path/to/csv/file/containing/classes
```
`CSVGenerator` provides an easy way to define your own datasets. It uses two CSV files: one file containing annotations and one file containing a class name to ID mapping.
The CSV file with annotations should contain one annotation per line. Images with multiple bounding boxes should use one row per bounding box. Note that indexing for pixel values starts at 0. The expected format of each line is:
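```
path/to/image.jpg,x1,y1,x2,y2,class_name,path/to/mask.png
```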
Some images may not contain any labeled objects. To add these images to the dataset as negative examples, add an annotation where `x1`, `y1`, `x2`, `y2`, `class_name` and `mask` are all empty:
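```
path/to/image.jpg,,,,,,
```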
A full example:
```
/data/imgs/img_001.jpg,837,346,981,456,cow,/data/masks/img_001_001.png
/data/imgs/img_002.jpg,215,312,279,391,cat,/data/masks/img_002_001.png
/data/imgs/img_002.jpg,22,5,89,84,bird,/data/masks/img_002_002.png
/data/imgs/img_003.jpg,,,,,,
```
This defines a dataset with 3 images.
`img_001.jpg` contains a cow.
`img_002.jpg` contains a cat and a bird.
`img_003.jpg` contains no interesting objects/animals.
Class mapping format
The class name to ID mapping file should contain one mapping per line. Each line should use the following format:
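```
class_name,id
```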
Indexing for classes starts at 0. Do not include a background class as it is implicit. For example:
```
cow,0
cat,1
bird,2
```
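With both CSV files in place, a generator can be constructed along these lines (a sketch: the module path and constructor signature are assumed to mirror keras-retinanet's `CSVGenerator`, and the file paths are hypothetical):

```python
from keras_maskrcnn.preprocessing.csv_generator import CSVGenerator

# One annotation (or empty negative example) per line, plus the class mapping file.
generator = CSVGenerator(
    '/path/to/annotations.csv',
    '/path/to/classes.csv'
)
print(generator.num_classes())
```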
Feel free to join the `#keras-maskrcnn` Keras Slack channel for discussions and questions.