
A One-Stage Crowd Object Detection

(Sample detection result: val_1_dt)

YOLO for Crowd Detection

Environment Setup

  • Base Environment (an import-check sketch follows the commands below)
pip install -r requirements.txt
git clone https://github.com/NuayHL/odcore.git
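
A minimal sketch to confirm the setup, assuming requirements.txt pins PyTorch and assuming odcore is cloned into the repository root so it is importable as a package (adjust sys.path if you cloned it elsewhere); this helper is not part of the repository:

# check_env.py -- hypothetical helper to verify the base environment
import importlib
import sys

# Assumption: odcore was cloned into the repository root (./odcore).
for name in ("torch", "odcore"):
    try:
        module = importlib.import_module(name)
        print(f"{name}: OK ({getattr(module, '__version__', 'no version attribute')})")
    except ImportError as err:
        print(f"{name}: MISSING ({err})", file=sys.stderr)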

Simple Inference

python infer.py --ckpt-file YOLOv3_640.pth --conf-file cfgs/yolov3 --img imgs/val_1.png
  • Replace the image path to run inference on other images (a batch-inference sketch follows below)
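
To run inference over every image in imgs/ instead of one file at a time, a small wrapper around the documented infer.py command is enough; this is a hypothetical convenience script, not part of the repository:

# batch_infer.py -- hypothetical wrapper around the documented infer.py CLI
import subprocess
from pathlib import Path

CKPT = "YOLOv3_640.pth"   # same checkpoint as the single-image example
CONF = "cfgs/yolov3"      # same config as the single-image example

for img in sorted(Path("imgs").glob("*.png")):
    # Each call is identical to the documented command, only the image changes.
    subprocess.run(
        ["python", "infer.py", "--ckpt-file", CKPT, "--conf-file", CONF, "--img", str(img)],
        check=True,
    )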

Model Training

CrowdDetection
      |--CrowdHuman
            |--Images_train
            |      |-- All training images
            |--Images_val    
            |      |-- All val images
            |--annotation_train.odgt
            |--annotation_val.odgt
  • Run utility/tran_CrowdHuman.py from the repository root
  • Run the training command (an annotation sanity-check sketch follows it):
python train.py --conf-file cfgs/yolox --batch-size 32 --workers 8 --device 0 --eval-interval 10 --save-interval 50
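
Before launching training it can help to sanity-check the annotation files. A minimal sketch, assuming the standard CrowdHuman .odgt layout (one JSON object per line with an "ID" and a "gtboxes" list); this script is not part of the repository:

# check_odgt.py -- hypothetical sanity check for the CrowdHuman annotations
import json
from pathlib import Path

def summarize(odgt_path):
    images, boxes = 0, 0
    with Path(odgt_path).open() as f:
        for line in f:
            record = json.loads(line)          # assumption: one JSON object per line
            images += 1
            boxes += len(record.get("gtboxes", []))
    print(f"{odgt_path}: {images} images, {boxes} ground-truth boxes")

summarize("CrowdHuman/annotation_train.odgt")
summarize("CrowdHuman/annotation_val.odgt")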

Model Evaluation

*The CrowdHuman dataset is required

python eval.py --conf-file cfgs/yolo_v4 --ckpt-file best_epoch.pth --workers 8 --batch-size 32 --type mip --force-eval

Replace --ckpt-file and its corresponding config YAML file for custom evaluation

If you have a complete experiment directory under the running directory (example: running_log/rYOLOv4), you can evaluate a .pth file (example: best_epoch.pth) with eval.sh:

./eval.sh rYOLOv4 best_epoch mip
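
To evaluate several saved checkpoints in one go, a small loop over the documented eval.py command works; the checkpoint names below are placeholders and this script is not part of the repository:

# eval_all.py -- hypothetical loop over the documented eval.py CLI
import subprocess

CONF = "cfgs/yolo_v4"
# Placeholder checkpoint names; replace with the .pth files you actually have.
checkpoints = ["best_epoch.pth", "last_epoch.pth"]

for ckpt in checkpoints:
    subprocess.run(
        ["python", "eval.py", "--conf-file", CONF, "--ckpt-file", ckpt,
         "--workers", "8", "--batch-size", "32", "--type", "mip", "--force-eval"],
        check=True,
    )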

Evaluation type: selected with the --type flag of eval.py (mip in the example above) and the last argument of eval.sh

Loss Visualization

python drawloss.py --loss-log exp_loss.log
  • Point --loss-log at other _loss.log files to visualize them (a multi-log overlay sketch follows below)
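
drawloss.py plots a single log; to overlay several runs, a sketch like the following can work. It assumes each _loss.log file holds one numeric loss value as the last field of each line, which may differ from the actual log format; the file names are placeholders:

# compare_loss.py -- hypothetical overlay of several loss logs
# Assumption: each *_loss.log line ends with a numeric loss value.
import matplotlib.pyplot as plt

logs = ["exp_loss.log", "exp2_loss.log"]   # placeholder file names

for log in logs:
    with open(log) as f:
        losses = [float(line.split()[-1]) for line in f if line.strip()]
    plt.plot(losses, label=log)

plt.xlabel("logged step")
plt.ylabel("loss")
plt.legend()
plt.show()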

Copyright © NuayHL 2022-2023. All rights reserved
