FCOS: Fully Convolutional One-Stage Object Detection

This project hosts the code for implementing the FCOS algorithm for object detection, as presented in our paper:

FCOS: Fully Convolutional One-Stage Object Detection,
Tian, Zhi, Chunhua Shen, Hao Chen, and Tong He,
arXiv preprint arXiv:1904.01355 (2019).

The full paper is available at: https://arxiv.org/abs/1904.01355.

Highlights

  • Totally anchor-free: FCOS completely avoids the complicated computation related to anchor boxes and all hyper-parameters of anchor boxes.
  • Memory-efficient: FCOS has about a 2x smaller training memory footprint than its anchor-based counterpart RetinaNet.
  • Better performance: Compared to RetinaNet, FCOS achieves better performance under exactly the same training and testing settings.
  • State-of-the-art performance: Without bells and whistles, FCOS achieves state-of-the-art performance, reaching 41.0% (ResNet-101-FPN) and 42.1% (ResNeXt-32x8d-101) AP on COCO test-dev.
  • Faster: FCOS enjoys faster training and inference speed than RetinaNet.

Required hardware

We train with 8 Nvidia V100 GPUs.
However, because FCOS is memory-efficient, 4 1080 Ti GPUs are also enough to train a fully fledged ResNet-50-FPN based FCOS.

Installation

This FCOS implementation is based on maskrcnn-benchmark, so its installation is the same as for the original maskrcnn-benchmark.

Please check INSTALL.md for installation instructions. You may also want to see the original README.md of maskrcnn-benchmark.

Inference

The inference command line for the COCO minival split:

python tools/test_net.py \
    --config-file configs/fcos/fcos_R_50_FPN_1x.yaml \
    MODEL.WEIGHT models/FCOS_R_50_FPN_1x.pth \
    TEST.IMS_PER_BATCH 4    

Please note that:

  1. If your model's name is different, please replace models/FCOS_R_50_FPN_1x.pth with your own.
  2. If you encounter an out-of-memory error, please try to reduce TEST.IMS_PER_BATCH to 1.
  3. If you want to evaluate a different model, please change --config-file to its config file (in configs/fcos) and MODEL.WEIGHT to its weights file.
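Putting notes 2 and 3 together, here is a sketch of evaluating a different model with batch size 1. The config and weight file names are assumptions based on the model table below; substitute your own. The leading echo only prints the command (so the sketch runs anywhere); remove it to actually launch the evaluation.

```shell
# Hypothetical file names for the R-101 2x model -- adjust to your checkout.
CONFIG=configs/fcos/fcos_R_101_FPN_2x.yaml
WEIGHTS=models/FCOS_R_101_FPN_2x.pth

# echo prints the evaluation command; drop it to run for real.
echo python tools/test_net.py \
    --config-file "$CONFIG" \
    MODEL.WEIGHT "$WEIGHTS" \
    TEST.IMS_PER_BATCH 1
```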

For your convenience, we provide the following trained models (more models are coming soon).

Model | Total training mem (GB) | Multi-scale training | Testing time / im | AP (minival) | AP (test-dev) | Link
--- | --- | --- | --- | --- | --- | ---
FCOS_R_50_FPN_1x | 29.3 | No | 71ms | 36.6 | 37.0 | download
FCOS_R_101_FPN_2x | 44.1 | Yes | 74ms | 40.9 | 41.0 | download
FCOS_X_101_32x8d_FPN_2x | 72.9 | Yes | 122ms | 42.0 | 42.1 | download

[1] 1x means the model is trained for 90K iterations.
[2] 2x means the model is trained for 180K iterations.
[3] We report total training memory footprint on all GPUs instead of the memory footprint per GPU as in maskrcnn-benchmark.
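The per-image timings in the table translate directly into evaluation wall-clock time. As a back-of-the-envelope sketch (assuming the standard 5,000-image minival split and the table's 71 ms/image for FCOS_R_50_FPN_1x, ignoring data-loading overhead):

```shell
# Approximate single-GPU time to score minival with FCOS_R_50_FPN_1x:
# 5,000 images x 71 ms/image, expressed in seconds (roughly 6 minutes).
echo $((5000 * 71 / 1000))
```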

Training

The following command line will train FCOS_R_50_FPN_1x on 8 GPUs with Synchronous Stochastic Gradient Descent (SGD):

python -m torch.distributed.launch \
    --nproc_per_node=8 \
    --master_port=$((RANDOM + 10000)) \
    tools/train_net.py \
    --skip-test \
    --config-file configs/fcos/fcos_R_50_FPN_1x.yaml \
    DATALOADER.NUM_WORKERS 2 \
    OUTPUT_DIR training_dir/fcos_R_50_FPN_1x

Note that:

  1. If you want to use fewer GPUs, please reduce --nproc_per_node. The total batch size does not depend on nproc_per_node. If you want to change the total batch size, please change SOLVER.IMS_PER_BATCH in configs/fcos/fcos_R_50_FPN_1x.yaml.
  2. The models will be saved into OUTPUT_DIR.
  3. If you want to train FCOS with other backbones, please change --config-file.
  4. Sometimes you may encounter a deadlock with 100% GPU usage, which might be an NCCL problem. Please try export NCCL_P2P_DISABLE=1 before running the training command line.
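Note 1 in numbers: SOLVER.IMS_PER_BATCH is the total batch size and is split evenly across the launched processes, so the per-GPU load is IMS_PER_BATCH / nproc_per_node. A quick sanity check, assuming the maskrcnn-benchmark default of 16 images (check your yaml):

```shell
# SOLVER.IMS_PER_BATCH is the *total* batch size (16 assumed here).
IMS_PER_BATCH=16
# --nproc_per_node, i.e. the number of GPUs used.
NPROC=8
# Images processed per GPU per iteration.
echo $((IMS_PER_BATCH / NPROC))
```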

Contributing to the project

Any pull requests or issues are welcome.

Citations

Please consider citing our paper in your publications if the project helps your research. The BibTeX entry is as follows.

@article{tian2019fcos,
  title={FCOS: Fully Convolutional One-Stage Object Detection},
  author={Tian, Zhi and Shen, Chunhua and Chen, Hao and He, Tong},
  journal={arXiv preprint arXiv:1904.01355},
  year={2019}
}

License

For academic use, this project is licensed under the 2-clause BSD License - see the LICENSE file for details. For commercial use, please contact the authors.
