
Notice

This branch is developed on PyTorch 0.4.0. We have released a new version of AlphaPose based on PyTorch 1.1+. Please check out our master branch for more details.

News!

  • Dec 2019: v0.3.0 version of AlphaPose is released! Smaller model, higher accuracy!
  • Apr 2019: MXNet version of AlphaPose is released! It runs at 23 fps on the COCO validation set.
  • Feb 2019: CrowdPose is integrated into AlphaPose now!
  • Dec 2018: General version of PoseFlow is released! 3X faster, with support for visualizing pose tracking results!
  • Sep 2018: v0.2.0 version of AlphaPose is released! It runs at 20 fps on the COCO validation set (4.6 people per image on average) and achieves 71 mAP!

AlphaPose

AlphaPose is an accurate multi-person pose estimator, and the first open-source system to achieve 70+ mAP (72.3 mAP) on the COCO dataset and 80+ mAP (82.1 mAP) on the MPII dataset. To match poses that correspond to the same person across frames, we also provide an efficient online pose tracker called Pose Flow. It is the first open-source online pose tracker that achieves both 60+ mAP (66.5 mAP) and 50+ MOTA (58.3 MOTA) on the PoseTrack Challenge dataset.

AlphaPose supports both Linux and Windows!

Installation

For the Windows version, please check out doc/win_install.md.

  1. Get the code.
git clone -b pytorch https://github.com/MVIG-SJTU/AlphaPose.git
  2. Install PyTorch 0.4.0 and other dependencies.
pip install -r requirements.txt
  3. Download the models manually: duc_se.pth (2018/08/30) (Google Drive | Baidu pan) and yolov3-spp.weights (Google Drive | Baidu pan). Place them into ./models/sppe and ./models/yolo respectively.
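Before running the demos, it can save a failed run to confirm the downloaded weights are actually in place. This is a minimal sketch, not part of AlphaPose itself; the directories and filenames are the ones listed in step 3 above.

```python
import os

# Expected model locations from the installation steps above
# (adjust if you placed or renamed the files differently).
EXPECTED_MODELS = {
    "./models/sppe": "duc_se.pth",
    "./models/yolo": "yolov3-spp.weights",
}

def missing_models(expected):
    """Return (directory, filename) pairs that are not yet in place."""
    return [
        (d, f) for d, f in expected.items()
        if not os.path.isfile(os.path.join(d, f))
    ]

if __name__ == "__main__":
    missing = missing_models(EXPECTED_MODELS)
    if missing:
        for d, f in missing:
            print("missing: put {} into {}".format(f, d))
    else:
        print("all model weights found")
```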

Quick Start

  • Input dir: Run AlphaPose for all images in a folder with:
python3 demo.py --indir ${img_directory} --outdir examples/res 
  • Video: Run AlphaPose for a video and save the rendered video with:
python3 video_demo.py --video ${path to video} --outdir examples/res --save_video
  • Webcam: Run AlphaPose using webcam and visualize the results with:
python3 webcam_demo.py --webcam 0 --outdir examples/res --vis
  • Input list: Run AlphaPose for images in a list and save the rendered images with:
python3 demo.py --list examples/list-coco-demo.txt --indir ${img_directory} --outdir examples/res --save_img
  • Note: If you meet an OOM (out of memory) problem, decrease the pose estimation batch size until the program can run on your machine:
python3 demo.py --indir ${img_directory} --outdir examples/res --posebatch 30
  • Getting more accurate: You can enable flip testing to get more accurate results by disabling fast_inference, e.g.:
python3 demo.py --indir ${img_directory} --outdir examples/res --fast_inference False
  • Speeding up: Check out speed_up.md for more details.
  • Output format: Check out output.md for more details.
  • For more: Check out run.md for more options.
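The demos above write their detections to a JSON results file in --outdir (output.md is the authoritative reference for its layout). As a minimal sketch, assuming each entry carries an image_id and a flat keypoints list of (x, y, score) triplets, the results can be grouped per image like this:

```python
import json

def load_poses(path):
    """Group detections by image. Each 'keypoints' entry is assumed to be a
    flat [x1, y1, score1, x2, y2, score2, ...] list of joint triplets."""
    with open(path) as f:
        results = json.load(f)
    by_image = {}
    for det in results:
        kps = det["keypoints"]
        # Re-fold the flat list into (x, y, score) tuples, one per joint.
        joints = [(kps[i], kps[i + 1], kps[i + 2])
                  for i in range(0, len(kps), 3)]
        by_image.setdefault(det["image_id"], []).append(joints)
    return by_image

# Example (after running one of the demos above):
#   poses = load_poses("examples/res/alphapose-results.json")
#   for image, people in poses.items():
#       print(image, len(people), "person(s)")
```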

Pose Tracking

Please read PoseFlow/README.md for details.

CrowdPose

Please read doc/CrowdPose.md for details.

FAQ

Check out faq.md for frequently asked questions.

Contributors

The PyTorch version of AlphaPose is developed and maintained by Jiefeng Li, Hao-Shu Fang, Yuliang Xiu and Cewu Lu.

Citation

Please cite these papers in your publications if it helps your research:

@inproceedings{fang2017rmpe,
  title={{RMPE}: Regional Multi-person Pose Estimation},
  author={Fang, Hao-Shu and Xie, Shuqin and Tai, Yu-Wing and Lu, Cewu},
  booktitle={ICCV},
  year={2017}
}

@inproceedings{xiu2018poseflow,
  author = {Xiu, Yuliang and Li, Jiefeng and Wang, Haoyu and Fang, Yinghong and Lu, Cewu},
  title = {{Pose Flow}: Efficient Online Pose Tracking},
  booktitle={BMVC},
  year = {2018}
}

License

AlphaPose is freely available for non-commercial use, and may be redistributed under these conditions. For commercial queries, please drop an e-mail to mvig.alphapose[at]gmail[dot]com and cc lucewu[at]sjtu[dot]edu[dot]cn. We will send the detailed agreement to you.