aLRP Loss: A Ranking-based, Balanced Loss Function Unifying Classification and Localisation in Object Detection

The official implementation of aLRP Loss. Our implementation is based on mmdetection. A different implementation, built on the official AP Loss repository, is also available at this link.

aLRP Loss: A Ranking-based, Balanced Loss Function Unifying Classification and Localisation in Object Detection,
Kemal Oksuz, Baris Can Cam, Emre Akbas, Sinan Kalkan, NeurIPS 2020. (arXiv pre-print)

Summary

Average Localisation-Recall-Precision (aLRP) Loss is a ranking-based loss function that trains object detectors by unifying the localisation and classification branches. We define aLRP Loss as the average of the Localisation-Recall-Precision (LRP) [1] errors over the positive examples. To tackle the non-differentiable nature of ranking during backpropagation, we combine the error-driven update of perceptron learning with backpropagation by generalizing the training approach of AP Loss [2] to ranking-based loss functions (see Section 4 in the paper for details).
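To make the definition concrete, the sketch below computes the per-positive LRP errors for a single image and averages them. It is a simplified, forward-only illustration under our own assumptions (tensor layout, a single IoU threshold, no error-driven backward pass), not the implementation used in this repository:

```python
import torch

def alrp_value_sketch(scores, ious, labels, tau=0.5):
    """Forward-only sketch of the aLRP value for one image.

    scores: (N,) confidence scores of all detections/anchors
    ious:   (N,) IoU of each detection with its assigned ground truth (0 for negatives)
    labels: (N,) 1 for positives, 0 for negatives
    tau:    IoU threshold used to normalise the localisation error

    The actual method additionally uses an error-driven update to obtain
    gradients through the non-differentiable ranking (Section 4 of the paper).
    """
    pos = labels == 1
    pos_scores = scores[pos]
    # Normalised localisation error of each positive, in [0, 1].
    loc_err = ((1.0 - ious[pos]) / (1.0 - tau)).clamp(0.0, 1.0)

    per_positive = []
    for i in range(pos_scores.numel()):
        s_i = pos_scores[i]
        above = scores >= s_i                          # detections ranked at or above positive i
        rank = above.sum().float().clamp(min=1.0)      # rank of positive i among all detections
        n_fp = (above & ~pos).sum().float()            # negatives ranked above positive i
        loc_above = loc_err[pos_scores >= s_i].sum()   # localisation error accumulated over positives above i
        per_positive.append((n_fp + loc_above) / rank) # LRP error of positive i

    if not per_positive:                               # no positives in this image
        return scores.new_zeros(())
    return torch.stack(per_positive).mean()
```

In the actual method this average is minimised, and the error-driven update distributes gradients to both branches, which yields the self-balancing behaviour described below.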

With this formulation, aLRP Loss (i) enforces predictions with high confidence scores to have better localisation, thereby correlating the classification and localisation tasks (see the figure below), (ii) has significantly fewer hyperparameters (only one) than the conventional formulation (i.e. combining classification and regression losses with a scalar weight), and (iii) guarantees balanced training (see Theorem 2 in the paper).

aLRP Toy Example

How to Cite

Please cite the paper if you benefit from our paper or repository:

@inproceedings{aLRPLoss,
       title = {A Ranking-based, Balanced Loss Function Unifying Classification and Localisation in Object Detection},
       author = {Kemal Oksuz and Baris Can Cam and Emre Akbas and Sinan Kalkan},
       booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
       year = {2020}
}

RetinaNet Results

| Method | Backbone | Scale | AP (test-dev) | AP (minival) | Model | Log |
|---|---|---|---|---|---|---|
| AP Loss* | ResNet-50 | 500 | 35.7 | 35.4 | model | log |
| aLRP Loss (GIoU)* | ResNet-50 | 500 | 39.5 | 39.0 | model | log |
| aLRP Loss (GIoU+ATSS) | ResNet-50 | 500 | 41.3 | 41.0 | model | log |
| aLRP Loss (GIoU+ATSS) | ResNet-101 | 500 | 42.8 | 42.2 | model | log |
| aLRP Loss (GIoU+ATSS) | ResNeXt-101-64x4d | 500 | 44.6 | 44.5 | model | log |
| aLRP Loss (GIoU+ATSS) | ResNet-101 | 800 | 45.9 | 45.4 | model | log |
| aLRP Loss (GIoU+ATSS) | ResNeXt-101-64x4d | 800 | 47.8 | 47.2 | model | log |
| aLRP Loss (GIoU+ATSS) | ResNeXt-101-64x4d-DCN | 800 | 48.9 | 48.6 | model | log |

*Following the learning rate scheduling adopted by AP Loss [2], these models are trained for 100 epochs, with the learning rate decreased at the 60th and 80th epochs. The remaining models are trained for 100 epochs, with the learning rate decreased at the 75th and 95th epochs.
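For reference, such a schedule would look roughly like the following in an mmdetection-style config; the optimizer values below are placeholders and may differ from the actual files in configs/alrp_loss:

```python
# Hypothetical mmdetection-style schedule for the *-marked models above:
# 100 epochs with the learning rate decreased at the 60th and 80th epochs.
# The optimizer values are placeholders, not the repository's settings.
optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)
lr_config = dict(policy='step', step=[60, 80])
total_epochs = 100
```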

FoveaBox Results

| Method | Backbone | AP (minival) | oLRP (minival) | Model | Log |
|---|---|---|---|---|---|
| Focal Loss+Smooth L1 | ResNet-50 | 38.3 | 68.8 | model | log |
| AP Loss+Smooth L1 | ResNet-50 | 36.5 | 69.8 | model | log |
| aLRP Loss | ResNet-50 | 39.7 | 67.2 | model | log |

Faster R-CNN Results

| Method | Backbone | AP (minival) | oLRP (minival) | Model | Log |
|---|---|---|---|---|---|
| Cross Entropy+Smooth L1 | ResNet-50 | 37.8 | 69.3 | model | log |
| Cross Entropy+GIoU Loss | ResNet-50 | 38.2 | 69.0 | model | log |
| aLRP Loss | ResNet-50 | 40.7 | 66.7 | model | log |

Specification of Dependencies and Preparation

  • Please see requirements.txt and the requirements folder for the rest of the dependencies.
  • Please refer to install.md for installation instructions of MMDetection; a quick post-install sanity check is sketched after this list.
  • Please see getting_started.md for dataset preparation and the basic usage of MMDetection.
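After installation, a short check such as the one below (our own suggestion, not part of the repository) can confirm that PyTorch, CUDA and MMDetection are importable:

```python
# Optional post-install sanity check: verifies that the core dependencies
# import correctly and that a CUDA device is visible to PyTorch.
import torch
import mmdet

print('PyTorch:', torch.__version__, '| CUDA available:', torch.cuda.is_available())
print('MMDetection:', mmdet.__version__)
```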

Training Code

The configuration files of all models listed above can be found in the configs/alrp_loss folder. You can follow getting_started.md for training code. As an example, to train aLRP Loss (GIoU+ATSS) on 4 GPUs as we did, use the following command:

./tools/dist_train.sh configs/alrp_loss/alrp_loss_retinanet_r50_fpn_ATSS_100e_coco500.py 4

Test Code

The configuration files of all models listed above can be found in the configs/alrp_loss folder. You can follow getting_started.md for test code. As an example, to test aLRP Loss (GIoU+ATSS), first download or train a model, then use the following command to test on multiple GPUs:

./tools/dist_test.sh configs/alrp_loss/alrp_loss_retinanet_r50_fpn_ATSS_100e_coco500.py -PATH-TO-TRAINED-MODEL 4 --eval bbox

You can also test a model on a single GPU with the following example command:

python tools/test.py configs/alrp_loss/alrp_loss_retinanet_r50_fpn_ATSS_100e_coco500.py -PATH-TO-TRAINED-MODEL --eval bbox
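For quick qualitative checks on single images, MMDetection's high-level Python API can also be used. The sketch below follows the standard init_detector/inference_detector interface; the checkpoint and image paths are placeholders:

```python
# Single-image inference sketch using MMDetection's high-level API.
# The checkpoint and image paths are placeholders.
from mmdet.apis import init_detector, inference_detector

config_file = 'configs/alrp_loss/alrp_loss_retinanet_r50_fpn_ATSS_100e_coco500.py'
checkpoint_file = 'path/to/trained_model.pth'   # e.g. a model downloaded from the tables above

model = init_detector(config_file, checkpoint_file, device='cuda:0')
result = inference_detector(model, 'path/to/image.jpg')  # per-class lists of [x1, y1, x2, y2, score]
```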

License

Following MMDetection, this project is released under the Apache 2.0 license.

References

[1] Oksuz K, Cam BC, Akbas E, Kalkan S, Localization recall precision (LRP): A new performance metric for object detection, ECCV 2018.
[2] Chen K, Li J, Lin W, See J, Wang J, Duan L, Chen Z, He C, Zou J, Towards Accurate One-Stage Object Detection With AP-Loss, CVPR 2019 & TPAMI.

Contact

This repo is maintained by Kemal Oksuz and Baris Can Cam.
