# Pose Flow

Official implementation of [Pose Flow: Efficient Online Pose Tracking](https://arxiv.org/abs/1802.00977) (BMVC 2018).

Results on the PoseTrack Challenge validation set:

**Task 2: Multi-Person Pose Estimation (mAP)**

| Method | Head | Shoulder | Elbow | Wrist | Hip | Knee | Ankle | Total |
|---|---|---|---|---|---|---|---|---|
| Detect-and-Track (FAIR) | 67.5 | 70.2 | 62.0 | 51.7 | 60.7 | 58.7 | 49.8 | 60.6 |
| AlphaPose | 66.7 | 73.3 | 68.3 | 61.1 | 67.5 | 67.0 | 61.3 | 66.5 |
**Task 3: Pose Tracking (MOTA)**

| Method | Head | Shoulder | Elbow | Wrist | Hip | Knee | Ankle | Total MOTA | Total MOTP | Speed (FPS) |
|---|---|---|---|---|---|---|---|---|---|---|
| Detect-and-Track (FAIR) | 61.7 | 65.5 | 57.3 | 45.7 | 54.3 | 53.1 | 45.7 | 55.2 | 61.5 | Unknown |
| PoseFlow (DeepMatch) | 59.8 | 67.0 | 59.8 | 51.6 | 60.0 | 58.4 | 50.5 | 58.3 | 67.8 | 8 |
| PoseFlow (OrbMatch) | 59.0 | 66.8 | 60.0 | 51.8 | 59.4 | 58.4 | 50.3 | 58.0 | 62.2 | 24 |

## Requirements

- Python 2.7.13
- OpenCV 3.4.2.16
- OpenCV-contrib 3.4.2.16
- tqdm 4.19.8

## Installation

1. Download the PoseTrack dataset from [PoseTrack](https://posetrack.net) into `AlphaPose/PoseFlow/posetrack_data/`.
2. (Optional) Use DeepMatching to extract dense correspondences between adjacent frames in every video. Refer to the DeepMatching Compile Error notes to compile DeepMatching correctly.
```bash
pip install -r requirements.txt

# Generate correspondences with DeepMatching
# (more robust, but slower)
cd deepmatching
make clean all
make
cd ..
python matching.py --orb=0

# Generate correspondences with ORB
# (faster, but less robust)
python matching.py --orb=1
```
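
As a rough illustration of what ORB-based correspondence extraction involves, here is a minimal sketch built on the pinned OpenCV version. The frame file names are placeholders, and `matching.py`'s actual parameters and output format may differ:

```python
import cv2

# Read two adjacent frames (placeholder file names).
img1 = cv2.imread("frame_000001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_000002.jpg", cv2.IMREAD_GRAYSCALE)

# Detect ORB keypoints and compute binary descriptors.
orb = cv2.ORB_create(nfeatures=5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matching; cross-check drops asymmetric matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Each match is a point correspondence between the two frames.
for m in matches[:10]:
    x1, y1 = kp1[m.queryIdx].pt
    x2, y2 = kp2[m.trainIdx].pt
    print("(%.1f, %.1f) -> (%.1f, %.1f)" % (x1, y1, x2, y2))
```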

## Quick Start

First, run AlphaPose to generate multi-person pose estimation results on your videos; see alpha-pose-results-sample.json for the expected JSON format.
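
For a rough idea of how to consume such results, the sketch below groups detections by frame. The `image_id`, `keypoints`, and `score` keys are assumptions based on common AlphaPose output; treat alpha-pose-results-sample.json as the authoritative schema:

```python
import json
from collections import defaultdict

# Load AlphaPose results (field names below are assumptions;
# verify them against alpha-pose-results-sample.json).
with open("alpha-pose-results-sample.json") as f:
    detections = json.load(f)

poses_per_frame = defaultdict(list)
for det in detections:
    # One entry per detected person: frame id, flat keypoint list, confidence.
    poses_per_frame[det["image_id"]].append(
        {"keypoints": det["keypoints"], "score": det["score"]}
    )

for frame_id in sorted(poses_per_frame):
    print("%s: %d people" % (frame_id, len(poses_per_frame[frame_id])))
```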

Then run pose tracking:

```bash
python tracker.py --dataset=val/test --orb=1/0
```
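
For example, to track the validation split using the faster ORB correspondences:

```bash
python tracker.py --dataset=val --orb=1
```

Pass `--orb=0` instead if you generated DeepMatching correspondences.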

## Evaluation

The original poseval repository includes instructions on how to convert annotation files from MAT to JSON.

Evaluate pose tracking results on the validation set:

```bash
git clone https://github.com/leonid-pishchulin/poseval.git --recursive
cd poseval/py && export PYTHONPATH=$PWD/../py-motmetrics:$PYTHONPATH
cd ../../
python poseval/py/evaluate.py --groundTruth=./posetrack_data/annotations/val \
                              --predictions=./${track_result_dir}/ \
                              --evalPoseTracking --evalPoseEstimation
```

## Citation

Please cite this paper in your publications if it helps your research:

```bibtex
@inproceedings{xiu2018poseflow,
  author = {Xiu, Yuliang and Li, Jiefeng and Wang, Haoyu and Fang, Yinghong and Lu, Cewu},
  title = {{Pose Flow}: Efficient Online Pose Tracking},
  booktitle = {BMVC},
  year = {2018}
}
```