Object detection and instance segmentation on MS COCO 2017 are implemented on top of MMDetection.
Model | AP^box | AP^box_50 | AP^box_75 | AP^mask | AP^mask_50 | AP^mask_75 | Latency | Ckpt | Log
---|---|---|---|---|---|---|---|---|---
RepNeXt-M3 | 40.8 | 62.4 | 44.7 | 37.8 | 59.5 | 40.6 | 5.1ms | M3 | M3
RepNeXt-M4 | 42.9 | 64.4 | 47.2 | 39.1 | 61.7 | 41.7 | 6.6ms | M4 | M4
RepNeXt-M5 | 44.7 | 66.0 | 49.2 | 40.7 | 63.5 | 43.6 | 10.4ms | M5 | M5
Install mmcv-full and MMDetection v2.28.2; later versions should work as well. The easiest way is to install them via MIM:
```bash
pip install -U openmim
mim install mmcv-full==1.7.1
mim install mmdet==2.28.2
```
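As a quick, optional sanity check (not part of the original instructions), you can confirm that the intended versions are importable:

```python
# Optional sanity check: confirm the expected mmcv-full and mmdet versions.
import mmcv
import mmdet

print(mmcv.__version__)   # expected: 1.7.1
print(mmdet.__version__)  # expected: 2.28.2
```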
Prepare the COCO 2017 dataset according to the instructions in MMDetection. The dataset should be organized as follows:
```
detection
├── data
│   ├── coco
│   │   ├── annotations
│   │   ├── train2017
│   │   ├── val2017
│   │   ├── test2017
```
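As a small optional check (a sketch, not part of the original instructions), you can verify this layout from the detection/ directory before launching any jobs:

```python
# Optional check that the COCO 2017 folders are where the configs expect them.
# Run from the detection/ directory.
import os

coco_root = 'data/coco'
for sub in ('annotations', 'train2017', 'val2017', 'test2017'):
    path = os.path.join(coco_root, sub)
    print(f'{path}: {"ok" if os.path.isdir(path) else "missing"}')
```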
We provide a multi-GPU testing script; specify the config file, the checkpoint, and the number of GPUs to use:
```bash
./dist_test.sh config_file path/to/checkpoint #GPUs --eval bbox segm
```
For example, to test RepNeXt-M3 on COCO 2017 on an 8-GPU machine:
```bash
./dist_test.sh configs/mask_rcnn_repnext_m3_fpn_1x_coco.py path/to/repnext_m3_coco.pth 8 --eval bbox segm
```
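For a quick qualitative check on a single image, the standard MMDetection 2.x Python API can also be used. This is only a sketch, assuming you run it from the detection/ directory so the RepNeXt backbone is registered; `demo.jpg` and `result.jpg` are placeholder file names:

```python
# Single-image inference sketch using the standard MMDetection 2.x API.
from mmdet.apis import init_detector, inference_detector

config_file = 'configs/mask_rcnn_repnext_m3_fpn_1x_coco.py'
checkpoint_file = 'path/to/repnext_m3_coco.pth'

model = init_detector(config_file, checkpoint_file, device='cuda:0')
result = inference_detector(model, 'demo.jpg')                       # placeholder input image
model.show_result('demo.jpg', result, out_file='result.jpg')         # placeholder output path
```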
Download the ImageNet-1K pretrained weights into `./pretrain` before training.
We provide a PyTorch distributed data parallel (DDP) training script, dist_train.sh. For example, to train RepNeXt-M3 on an 8-GPU machine:
```bash
./dist_train.sh configs/mask_rcnn_repnext_m3_fpn_1x_coco.py 8
```
Tip: remember to specify the config file and the number of GPUs!
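For reference, configs in MMDetection 2.x usually point the backbone at the ImageNet-1K weights via `init_cfg`. The sketch below only illustrates that convention; the backbone registry name and checkpoint file name are placeholders, and the actual RepNeXt configs may load the weights differently:

```python
# Illustration of the standard MMDetection 2.x init_cfg convention; the
# backbone type and checkpoint path below are placeholders.
model = dict(
    backbone=dict(
        type='RepNeXt',  # hypothetical registry name
        init_cfg=dict(
            type='Pretrained',
            checkpoint='pretrain/repnext_m3_imagenet.pth')))  # placeholder file in ./pretrain
```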
If training fails with:

```
AttributeError: 'MMDistributedDataParallel' object has no attribute '_use_replicated_tensor_module'
```
Solution: this typically appears with newer PyTorch versions, where DistributedDataParallel no longer defines `_use_replicated_tensor_module`. Edit the `_run_ddp_forward` function (around line 160) of mmcv/parallel/distributed.py in your environment's site-packages, e.g. /home/someone/micromamba/envs/detection/lib/python3.8/site-packages/mmcv/parallel/distributed.py:
```python
# comment out the two lines below
# module_to_run = self._replicated_tensor_module if \
#     self._use_replicated_tensor_module else self.module
# and replace them with:
module_to_run = self.module
```
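If you prefer not to edit the installed package, a hypothetical runtime workaround (a sketch, not the repository's fix) is to set the missing attribute right after the DDP wrapper is constructed, e.g. near the top of the training entry point:

```python
# Hypothetical runtime workaround: newer PyTorch no longer sets
# `_use_replicated_tensor_module` on DistributedDataParallel, so force it to
# False; mmcv's original ternary then falls through to self.module without
# ever touching the removed attribute.
from mmcv.parallel import MMDistributedDataParallel

_orig_init = MMDistributedDataParallel.__init__

def _patched_init(self, *args, **kwargs):
    _orig_init(self, *args, **kwargs)
    if not hasattr(self, '_use_replicated_tensor_module'):
        self._use_replicated_tensor_module = False

MMDistributedDataParallel.__init__ = _patched_init
```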
Similarly, if you see:

```
AttributeError: 'int' object has no attribute 'type'
```
Solution: newer PyTorch versions expect a `torch.device` rather than a bare GPU index in `_get_stream`. Edit the `forward` function (around line 75) of mmcv/parallel/_functions.py in your environment's site-packages, e.g. /home/someone/micromamba/envs/detection/lib/python3.8/site-packages/mmcv/parallel/_functions.py:
```python
# comment out the line below
# streams = [_get_stream(device) for device in target_gpus]
# and replace it with (torch is already imported at the top of this file):
streams = [_get_stream(torch.device("cuda", device)) for device in target_gpus]
```