Keras implementation of Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks [1], cloned from the original keras-frcnn repository.


  • supports inception_resnet_v2 [2]
    • to use inception_resnet_v2 from keras.applications as the feature extractor, create a new inception_resnet_v2 model file using transfer/
    • if you use the original inception_resnet_v2 model as the feature extractor, the pretrained weights cannot be loaded into the Faster R-CNN model


  • Both Theano and TensorFlow backends are supported. However, compile times are very high with Theano, so TensorFlow is highly recommended.

  • The training script can be used to train a model. To train on Pascal VOC data, simply run it with -p /path/to/pascalvoc/.

  • The Pascal VOC dataset (images and annotations for bounding boxes around the classified objects) can be obtained from:

  • The simple parser provides an alternative way to input data, using a text file with each line containing:


    For example:



    The classes will be inferred from the file. To use the simple parser instead of the default Pascal VOC-style parser, pass the command-line option -o simple, e.g. -o simple -p my_data.txt.
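The annotation format itself did not survive in this copy of the README, so as a sketch: assuming a comma-separated line of the form filepath,x1,y1,x2,y2,class_name (an assumption, since the format is elided above), a minimal parser that also infers the class list could look like this:

```python
def parse_simple_annotations(lines):
    """Parse simple-format annotation lines into records and a class list.

    Assumes the hypothetical format: filepath,x1,y1,x2,y2,class_name
    (the actual format expected by the simple parser may differ).
    """
    records, classes = [], set()
    for line in lines:
        line = line.strip()
        if not line:
            continue
        filepath, x1, y1, x2, y2, class_name = line.split(',')
        records.append({
            'filepath': filepath,
            'bbox': (int(x1), int(y1), int(x2), int(y2)),
            'class': class_name,
        })
        classes.add(class_name)  # classes are inferred from the file
    return records, sorted(classes)

# Hypothetical example with two annotation lines:
demo = [
    '/data/imgs/img_001.jpg,837,346,981,456,cow',
    '/data/imgs/img_002.jpg,215,312,279,391,cat',
]
records, classes = parse_simple_annotations(demo)
print(classes)  # ['cat', 'cow']
```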

  • Running the training script will write the weights to disk in an HDF5 file, as well as all the settings of the training run to a pickle file. These settings can then be loaded by the test script.
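A sketch of that settings round-trip (the file name and the use of a plain dict are assumptions for illustration; the real settings object is the repository's config class):

```python
import pickle

# Stand-in for the training settings (a plain dict instead of the
# repository's config object, and a hypothetical file name).
settings = {'anchor_box_scales': [128, 256, 512],
            'anchor_box_ratios': [[1, 1], [1, 2], [2, 1]],
            'im_size': 600}

# Training writes the settings alongside the HDF5 weights...
with open('config.pickle', 'wb') as f:
    pickle.dump(settings, f)

# ...and testing loads them back, so train and test runs agree.
with open('config.pickle', 'rb') as f:
    loaded = pickle.load(f)

print(loaded == settings)  # True
```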

  • The test script can be used to perform inference, given pretrained weights and a config file. Specify the path to the folder containing images with -p /path/to/test_data/.

  • Data augmentation can be applied by specifying --hf for horizontal flips, --vf for vertical flips, and --rot for 90-degree rotations.
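A sketch of how those three augmentations act on an image and its bounding box, in pure NumPy (not the repository's actual augmentation code; coordinates are treated as continuous (x1, y1, x2, y2)):

```python
import numpy as np

def augment(img, bbox, hf=False, vf=False, rot90=False):
    """Apply horizontal flip, vertical flip, and/or a 90-degree rotation
    to an image (H x W array) and one (x1, y1, x2, y2) box."""
    h, w = img.shape[:2]
    x1, y1, x2, y2 = bbox
    if hf:  # --hf: mirror left-right
        img = img[:, ::-1]
        x1, x2 = w - x2, w - x1
    if vf:  # --vf: mirror top-bottom
        img = img[::-1, :]
        y1, y2 = h - y2, h - y1
    if rot90:  # --rot: rotate 90 degrees counter-clockwise
        img = np.rot90(img)
        # after a CCW rotation, x' = y and y' = w - x
        x1, y1, x2, y2 = y1, w - x2, y2, w - x1
    return img, (x1, y1, x2, y2)

# Example: flip a 3x4 image horizontally and track the box.
img = np.arange(12).reshape(3, 4)
flipped, box = augment(img, (0, 0, 1, 2), hf=True)
print(box)  # (3, 0, 4, 2)
```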


  • The config file contains all settings for the train or test run. The default settings match those in the original Faster R-CNN paper. The anchor box sizes are [128, 256, 512] and the ratios are [1:1, 1:2, 2:1].
  • The Theano backend by default uses a 7x7 pooling region, instead of 14x14 as in the Faster R-CNN paper. This cuts down compile time slightly.
  • The TensorFlow backend performs a resize on the pooling region instead of max pooling. This is much more efficient and has little impact on results.
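The anchor configuration above can be sketched as follows: three sizes times three aspect ratios give nine anchor shapes, with the area held roughly constant while the ratio varies. This is a minimal illustration, not the repository's exact anchor code:

```python
# Nine anchors = 3 sizes x 3 aspect ratios, as in the Faster R-CNN paper.
sizes = [128, 256, 512]
ratios = [(1, 1), (1, 2), (2, 1)]

anchors = []
for size in sizes:
    for rw, rh in ratios:
        # keep the anchor area near size^2 while varying width:height
        area = float(size * size)
        w = (area * rw / rh) ** 0.5
        h = area / w
        anchors.append((round(w), round(h)))

print(len(anchors))  # 9
print(anchors[0])    # (128, 128)
```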

Example output:

ex1 ex2 ex3 ex4


  • If you get the error ValueError: There is a negative shape in the graph!, then update Keras to the newest version.

  • Make sure to use Python 2, not Python 3. If you get the error TypeError: unorderable types: dict() < dict(), you are using Python 3.

  • If you run out of memory, try reducing the number of RoIs that are processed simultaneously by passing a lower -n. Alternatively, try reducing the image size from the default value of 600 (this setting is found in the config file).
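A sketch of what reducing the image size means, assuming (as in many Faster R-CNN implementations; this is an assumption, not confirmed by the text above) that the setting rescales the image's shorter side to the target value:

```python
def get_new_img_size(width, height, img_min_side=600):
    # Assumption: the config value is the length of the image's shorter
    # side after resizing; the longer side scales proportionally.
    if width <= height:
        ratio = img_min_side / float(width)
        return img_min_side, int(height * ratio)
    ratio = img_min_side / float(height)
    return int(width * ratio), img_min_side

print(get_new_img_size(800, 600))       # (800, 600) - already at 600
print(get_new_img_size(800, 600, 400))  # (533, 400) - less memory
```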


[1] Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks, 2015
[2] Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning, 2016