Official PyTorch implementation of "Compactness Driven Co-learning for Crowd Counting and Localization".
```shell
conda create -n cclm python=3.10 cmake=3.16.3
conda activate cclm
pip install -r scripts/requirements.txt
```

Please follow FIDTM to process the datasets into COCO format.
The expected directory structure is:
```
data/
├── part_A_final/
│   ├── train_data/
│   ├── test_data/
│   ├── train_data_annotation.json
│   └── test_data_annotation.json
├── part_B_final/
├── NWPU/
└── FDST/
```
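As a quick sanity check after preprocessing, a small script can confirm that each split has its image directory and a parseable annotation file. The COCO-style top-level keys (`images`, `annotations`) used here are an assumption; adjust them to whatever the FIDTM conversion actually emits.

```python
import json
import os

def check_dataset(root, split="train"):
    """Verify the expected layout for one split of a dataset directory.

    Assumes COCO-style top-level keys "images" and "annotations";
    change these if the FIDTM output uses different field names.
    """
    img_dir = os.path.join(root, f"{split}_data")
    ann_file = os.path.join(root, f"{split}_data_annotation.json")
    if not os.path.isdir(img_dir):
        return f"missing image directory: {img_dir}"
    if not os.path.isfile(ann_file):
        return f"missing annotation file: {ann_file}"
    with open(ann_file) as f:
        ann = json.load(f)
    missing = [k for k in ("images", "annotations") if k not in ann]
    if missing:
        return f"{ann_file} lacks keys: {missing}"
    return "ok"

if __name__ == "__main__":
    for split in ("train", "test"):
        print(split, check_dataset("data/part_A_final", split))
```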
Edit `configs/fidt_ucl.json`:
- Change `Dataset.train.name` to the target dataset: `sta_crowd`, `stb_crowd`, `nwpu_crowd`, or `fdst_crowd`
- Update `ann_file` and `img_prefix` to your actual dataset paths
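The two config edits above can also be scripted. The sketch below assumes the nesting `Dataset.train.name` implied by the bullet list, with `ann_file` and `img_prefix` stored alongside `name`; the real layout of `fidt_ucl.json` may differ, so treat the key paths as assumptions.

```python
import json

def set_dataset(config_path, name, ann_file, img_prefix):
    """Point the training config at a different dataset.

    Assumes the config nests the dataset name under Dataset.train.name
    and keeps ann_file / img_prefix next to it, per the README bullets;
    adjust the key paths if the actual fidt_ucl.json differs.
    """
    with open(config_path) as f:
        cfg = json.load(f)
    train = cfg.setdefault("Dataset", {}).setdefault("train", {})
    train["name"] = name  # "sta_crowd", "stb_crowd", "nwpu_crowd", or "fdst_crowd"
    train["ann_file"] = ann_file
    train["img_prefix"] = img_prefix
    with open(config_path, "w") as f:
        json.dump(cfg, f, indent=2)
    return cfg

# Hypothetical usage for ShanghaiTech Part A:
# set_dataset("configs/fidt_ucl.json", "sta_crowd",
#             "data/part_A_final/train_data_annotation.json",
#             "data/part_A_final/train_data/")
```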
For single-node multi-GPU training (4 GPUs):
```shell
export CUDA_VISIBLE_DEVICES=0,1,2,3
torchrun --nproc_per_node=4 --nnodes=1 --master_port 12350 train_counter.py --config configs/fidt_ucl.json
```

Before evaluation, compile the Cython NMS module:
```shell
python setup.py build_ext --inplace
```

Then run inference on a single GPU:

```shell
export CUDA_VISIBLE_DEVICES=0
```
```shell
python inference_nwpu.py --config configs/fidt_ucl.json --ckpt path/to/your/best.pth
```

Replace `path/to/your/best.pth` with the actual path to your trained checkpoint.
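The compiled Cython module accelerates non-maximum suppression over predicted head points during localization. As a rough illustration of the general technique (not the repository's actual kernel or interface), greedy point NMS keeps the highest-scoring point and suppresses any remaining point within a radius of an already-kept one:

```python
import math

def point_nms(points, scores, radius):
    """Greedy point NMS: visit points in descending score order, keeping a
    point only if it is at least `radius` away from every kept point.
    Pure-Python sketch; a compiled version exists purely for speed.
    """
    order = sorted(range(len(points)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        xi, yi = points[i]
        if all(math.hypot(xi - points[j][0], yi - points[j][1]) >= radius
               for j in keep):
            keep.append(i)
    return keep

# Example: three detections, the first two nearly coincident.
pts = [(10.0, 10.0), (11.0, 10.0), (50.0, 50.0)]
scs = [0.9, 0.8, 0.7]
print(point_nms(pts, scs, radius=4.0))  # -> [0, 2]: the 0.8 point is suppressed
```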