S2ANet

A reimplementation of the S2ANet algorithm for Oriented Object Detection.

Note: DDP training and automatic mixed precision (AMP) in PyTorch are supported, so training is faster!

We have released the latest version!

  • To normalize the input data, images are simply divided by 255.0 instead of being standardized with mean=[123.675, 116.28, 103.53] and std=[58.395, 57.12, 57.375], so the mean and std do not need to be stored for inference (see the sketch after this list).
  • During training, the parameters of the first stage of ResNet are not frozen, which adds a little training time.
  • DDP training is more stable.
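
For reference, the two normalization schemes compare as follows (a minimal sketch; the tensor names and shapes are illustrative and not taken from the repository's preprocessing code):

import torch

# img: a uint8 image tensor of shape (3, H, W) with values in [0, 255]
img = torch.randint(0, 256, (3, 512, 512), dtype=torch.uint8)

# This repo's scheme: a single scale factor, nothing extra to store for inference.
x_scaled = img.float() / 255.0

# The ImageNet-style scheme it replaces: per-channel mean/std (0-255 range)
# that must be kept around at inference time.
mean = torch.tensor([123.675, 116.28, 103.53]).view(3, 1, 1)
std = torch.tensor([58.395, 57.12, 57.375]).view(3, 1, 1)
x_standardized = (img.float() - mean) / std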

1. Environment dependency

Our environment: Ubuntu 18.04 + PyTorch 1.7 + CUDA 10.2.
We have not tried other environments, but we recommend PyTorch >= 1.6 for DDP training.

2. Installation

(1) Clone the S2ANet repository

git clone https://github.com/chongkuiqi/S2ANet.git   
cd S2ANet  

(2) Install DOTA_devkit

sudo apt-get install swig  
cd DOTA_devkit/polyiou  
swig -c++ -python csrc/polyiou.i  
python setup.py build_ext --inplace  
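
As a quick sanity check of the build, the compiled polyiou extension can be imported and used to compute the IoU of two polygons (a minimal sketch; run it from the directory containing the generated polyiou module, and note that the binding names below come from DOTA_devkit and may differ slightly in your checkout):

import polyiou  # SWIG-generated module built by the commands above

# Two identical unit squares, given as flat x1 y1 x2 y2 x3 y3 x4 y4 lists.
p = polyiou.VectorDouble([0, 0, 1, 0, 1, 1, 0, 1])
q = polyiou.VectorDouble([0, 0, 1, 0, 1, 1, 0, 1])
print(polyiou.iou_poly(p, q))  # identical polygons should give 1.0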

(3) Compile the C++/CUDA library

cd S2ANet
python setup.py build_ext --inplace

3. Prepare datasets

We take the DOTA dataset as an example.

(1) Dataset folder structure

Download the DOTA dataset and arrange the folder structure as follows:

your_dir
├── DOTA
│   ├── train
│   │   ├── images
│   │   ├── labelTxt
│   ├── val
│   │   ├── images
│   │   ├── labelTxt

(2) Split images and convert the annotation format

The original DOTA images are very large, so they need to be split into smaller chip images as follows.
Note: the DOTA path and save path in 1_prepare_dota1_ms.py should be changed.

cd S2ANet/DOTA_devkit/
python 1_prepare_dota1_ms.py
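
Conceptually, the split slides a fixed-size window with some overlap over every large image. Below is a minimal sketch of the cropping step only; 1_prepare_dota1_ms.py also handles the annotations, multi-scale settings, and border cases, its chip naming may differ, and subsize=1024 / gap=200 are common DOTA settings rather than the script's guaranteed defaults.

import os
import cv2

def split_image(img_path, out_dir, subsize=1024, gap=200):
    # Slide a subsize x subsize window with gap pixels of overlap between chips.
    img = cv2.imread(img_path)
    h, w = img.shape[:2]
    stride = subsize - gap
    name = os.path.splitext(os.path.basename(img_path))[0]
    for top in range(0, max(h - gap, 1), stride):
        for left in range(0, max(w - gap, 1), stride):
            chip = img[top:top + subsize, left:left + subsize]
            cv2.imwrite(os.path.join(out_dir, f'{name}__{left}__{top}.png'), chip)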

Then convert the DOTA annotations to YOLO format.
Note: the DOTA_split path in 2_convert_dota_to_yolo.py should be changed.

python 2_convert_dota_to_yolo.py
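
For orientation, each line of a DOTA labelTxt file stores one oriented box as four corner points followed by a category and a difficulty flag: x1 y1 x2 y2 x3 y3 x4 y4 category difficult. A minimal parsing sketch is shown below; the exact label format written out by 2_convert_dota_to_yolo.py is defined by that script and is not reproduced here.

import cv2
import numpy as np

def parse_dota_line(line):
    # "x1 y1 x2 y2 x3 y3 x4 y4 category difficult" -> oriented box + class info
    parts = line.split()
    poly = np.array(parts[:8], dtype=np.float32).reshape(4, 2)
    category, difficult = parts[8], int(parts[9])
    # minAreaRect returns ((cx, cy), (w, h), angle) for the enclosing rotated box
    (cx, cy), (w, h), angle = cv2.minAreaRect(poly)
    return (cx, cy, w, h, angle), category, difficult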

In addition, val_split.txt is needed to evaluate the model; this file records the image names without extensions. Note: the path in 3_create_txt.py should be changed.

python 3_create_txt.py
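
val_split.txt is simply a list of the split val image names with their extensions removed. A minimal sketch of how such a file can be generated is shown below (the paths are assumptions; 3_create_txt.py is the authoritative script):

import os

img_dir = 'your_dir/DOTA_split/val/images'         # assumed location of the split val images
out_txt = 'your_dir/DOTA_split/val/val_split.txt'  # assumed output location
names = sorted(os.path.splitext(f)[0] for f in os.listdir(img_dir))
with open(out_txt, 'w') as f:
    f.write('\n'.join(names) + '\n')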

(3) Configure dota.yaml

The dota.yaml config file is also needed.
Note: the paths in dota.yaml should be changed.
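
A quick way to double-check the edited paths is to load dota.yaml and verify that they exist (a minimal sketch; the file location and the key names train and val are assumptions about the config layout, not guaranteed by the repository):

import os
import yaml

with open('dota.yaml') as f:          # adjust to wherever dota.yaml lives
    cfg = yaml.safe_load(f)

for key in ('train', 'val'):          # assumed keys pointing at the dataset splits
    path = cfg.get(key)
    status = 'exists' if path and os.path.exists(path) else 'MISSING'
    print(key, path, status)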

Finally, the folder structure will be like this:

your_dir
├── DOTA
│   ├── train
│   │   ├── images
│   │   ├── labelTxt
│   ├── val
│   │   ├── images
│   │   ├── labelTxt
├── DOTA_split
│   ├── train
│   │   ├── empty_images
│   │   ├── empty_labels
│   │   ├── images
│   │   ├── labels
│   │   ├── labelTxt
│   ├── val
│   │   ├── images
│   │   ├── labels
│   │   ├── labelTxt
│   │   ├── val_split.txt

4. Train the S2ANet model

In most cases, there is no need to modify the learning rate when changing the batch size.

(1) Single-GPU

cd S2ANet
python train.py

(2) Multi-GPU

Note: PyTorch multi-GPU DistributedDataParallel (DDP) mode is supported!

python -m torch.distributed.launch --nproc_per_node 2 train.py --device 0,1 --batch-size 16

If you get RuntimeError: Address already in use, use a different port number by adding --master_port as shown below:

python -m torch.distributed.launch --master_port 1234 --nproc_per_node 2 ...
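
For reference, DDP combined with automatic mixed precision in PyTorch >= 1.6 follows the general pattern below (a minimal sketch, not the repository's actual training loop; the model, loader, and optimizer are placeholders, and the model is assumed to return its loss directly):

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def train(local_rank, model, loader, optimizer, epochs=1):
    # One process per GPU; torch.distributed.launch sets the env vars read here.
    dist.init_process_group(backend='nccl')
    torch.cuda.set_device(local_rank)
    model = DDP(model.cuda(local_rank), device_ids=[local_rank])
    scaler = torch.cuda.amp.GradScaler()           # AMP loss scaler

    for _ in range(epochs):
        for imgs, targets in loader:
            optimizer.zero_grad()
            with torch.cuda.amp.autocast():        # forward pass in mixed precision
                loss = model(imgs.cuda(local_rank), targets)
            scaler.scale(loss).backward()          # backward on the scaled loss
            scaler.step(optimizer)
            scaler.update()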

5. Results and trained weights on DOTA dataset

Note: We only use the DOTA train set for training, and report mAP50 on the DOTA val set.

Model                 Backbone    Training data    mAP50               Download
S2ANet (paper)        R-50-FPN    train+val set    74.04 (test set)    ------
S2ANet (paper)        R-50-FPN    train set        70.2 (val set)      ------
S2ANet (this impl.)   R-50-FPN    train set        70.2 (val set)      model
S2ANet (latest)       R-50-FPN    train set        70.7 (val set)      model

6. Reference

Han, Jiaming; Ding, Jian; Li, Jie; Xia, Gui-Song. Align Deep Features for Oriented Object Detection. IEEE Transactions on Geoscience and Remote Sensing, 2021.
