Recently, I published an improved CenterNet, called CenterNet-plus. Although CenterNet-plus has a very simple pipeline without any DCN layers, it surpasses the official CenterNet.
You can get my CenterNet-plus from the following project:
https://github.com/yjh0410/CenterNet-plus
A PyTorch version of CenterNet (Objects as Points). Only the ResNet-18 backbone is supported; there is no DLA or Hourglass version.
I have trained it on VOC0712 and COCO 2017. You can download the trained models from BaiduYunDisk:
Link:https://pan.baidu.com/s/170OYftGRVW-j5qAKYyHSQQ
Password:jz4q
The official CenterNet takes advantage of DCN, while I simply replace it with the SPP module used in YOLOv3, as I'm a little lazy ~
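The SPP block referred to here is the one from YOLOv3-SPP: the input feature map is concatenated with max-pooled copies of itself at several kernel sizes, leaving the spatial size unchanged. A minimal PyTorch sketch (the class name and kernel sizes are my illustration, not the repo's exact code):

```python
# Sketch of a YOLOv3-style SPP block (illustrative, not the repo's exact code).
import torch
import torch.nn as nn

class SPP(nn.Module):
    """Concatenate the input with stride-1 max-pooled copies at several
    kernel sizes, so the spatial size is kept and channels grow 4x."""
    def __init__(self, kernel_sizes=(5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(
            nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)
            for k in kernel_sizes
        )

    def forward(self, x):
        return torch.cat([x] + [p(x) for p in self.pools], dim=1)

# Example: a 512-channel ResNet-18 C5 feature map becomes 2048 channels.
feat = torch.randn(1, 512, 16, 16)
out = SPP()(feat)
print(out.shape)  # torch.Size([1, 2048, 16, 16])
```

A 1x1 convolution typically follows the concatenation to bring the channel count back down before the detection heads.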
On VOC:
| model | data | mAP |
|:-----:|:----:|:---:|
| (official) resnet18 + DCN | VOC2007 | 75.7 |
| (Ours) resnet18 + SPP | VOC2007 | 75.6 |
On COCO:
| model | data | AP | AP50 |
|:-----:|:----:|:--:|:----:|
| (official) resnet18 + DCN | COCO val | 28 | 44.9 |
| (Ours) resnet18 + SPP | COCO val | 25.8 | 45.4 |
I'm still trying something new to make my CenterNet-Lite stronger.
- PyTorch (GPU) 1.1.0/1.2.0/1.3.0
- Tensorboard 1.14
- opencv-python
- Python 3.6/3.7
The download scripts are copied from the following excellent project: https://github.com/amdegroot/ssd.pytorch
I have uploaded VOC2007 and VOC2012 to BaiduYunDisk, so researchers in China can download them from:
Link:https://pan.baidu.com/s/1tYPGCYGyC0wjpC97H-zzMQ
Password:4la9
You will get a `VOCdevkit.zip`; just unzip it and put it into `data/`. After that, the full paths to the VOC datasets are `data/VOCdevkit/VOC2007` and `data/VOCdevkit/VOC2012`.
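After unzipping, you can quickly verify the expected layout with a few lines of Python (the helper below is my own sketch, using only the paths stated in this README):

```python
# Sanity-check the expected VOC directory layout (paths from this README).
from pathlib import Path

def check_voc(root="data/VOCdevkit"):
    """Return the expected VOC year folders that are missing under `root`."""
    root = Path(root)
    expected = [root / "VOC2007", root / "VOC2012"]
    return [str(p) for p in expected if not p.is_dir()]

missing = check_voc()
if missing:
    print("Missing:", ", ".join(missing))
else:
    print("VOC layout looks good.")
```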
# specify a directory for dataset to be downloaded into, else default is ~/data/
sh data/scripts/VOC2007.sh # <directory>
# specify a directory for dataset to be downloaded into, else default is ~/data/
sh data/scripts/VOC2012.sh # <directory>
The download scripts are copied from the following excellent project: https://github.com/DeNA/PyTorch_YOLOv3
Just run `sh data/scripts/COCO2017.sh`. You will get COCO train2017, val2017, and test2017.
python train.py --cuda -d voc
You can run `python train.py -h` to check all optional arguments.
python train.py --cuda -d coco
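During training, CenterNet regresses its heatmap head toward Gaussian-smoothed center targets. A simplified numpy sketch of that target generation (I use a fixed sigma here for brevity; the paper uses a size-adaptive radius, and this is not the repo's exact code):

```python
# Sketch of CenterNet's Gaussian heatmap target (simplified: fixed sigma
# instead of the paper's size-adaptive radius; not the repo's exact code).
import numpy as np

def draw_center(heatmap, cx, cy, sigma=2.0):
    """Splat a 2-D Gaussian peak (value 1 at the center) onto `heatmap`,
    keeping the element-wise maximum where objects overlap."""
    h, w = heatmap.shape
    ys, xs = np.ogrid[:h, :w]
    g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
    np.maximum(heatmap, g, out=heatmap)
    return heatmap

hm = np.zeros((32, 32), dtype=np.float32)
draw_center(hm, 10, 12)
print(hm[12, 10])  # 1.0 at the object center
```

The focal-loss variant used for the heatmap then treats the exact center as a positive and down-weights the penalty inside the Gaussian bump.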
python test.py --cuda -d voc --trained_model [ Please input the path to model dir. ]
python test.py --cuda -d coco-val --trained_model [ Please input the path to model dir. ]
python eval.py --cuda -d voc --train_model [ Please input the path to model dir. ]
To run on COCO_val:
python eval.py --cuda -d coco-val --train_model [ Please input the path to model dir. ]
To run on COCO test-dev (make sure you have downloaded test2017 first):
python eval.py --cuda -d coco-test --train_model [ Please input the path to model dir. ]
You will get a .json file which can be evaluated on the COCO test server.
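That file follows the standard COCO detection-results format: a list of dicts, each with `image_id`, `category_id`, `bbox` in `[x, y, w, h]`, and `score`. A minimal stdlib sanity check before uploading (the file name in the comment is a hypothetical placeholder):

```python
# Validate the COCO detection-results structure before uploading.
import json

REQUIRED = {"image_id", "category_id", "bbox", "score"}

def check_results(dets):
    """Return True if every entry looks like a COCO detection record."""
    return all(REQUIRED <= d.keys() and len(d["bbox"]) == 4 for d in dets)

sample = [{"image_id": 42, "category_id": 1,
           "bbox": [10.0, 20.0, 30.0, 40.0], "score": 0.87}]
print(check_results(sample))  # True

# For the real file (name is a hypothetical placeholder):
# with open("coco_test-dev_results.json") as f:
#     print(check_results(json.load(f)))
```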