Baseline Code for ADHA dataset
Hybrid-model-for-human-action-adverb-recognition

By Bo Pang, Kaiwen Zha, Cewu Lu.

Introduction

ADHA is the first human action adverb recognition dataset, and this hybrid model serves as its baseline. The model is a fusion of a two-stream model, a pose-based LSTM (PBLSTM) model, and an expression model. The expression information acts as a feature that is combined with the CNN features of the PBLSTM and two-stream models. The framework of the model is shown below:

*(figure: framework of the hybrid model)*
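The combination of the expression feature with a CNN feature can be sketched as a simple concatenation before the downstream classifier. This is only a minimal illustration with NumPy; the function name and the feature dimensions (2048 for the CNN feature, 7 for the expression scores) are assumptions, not the repository's actual sizes:

```python
import numpy as np

def fuse_expression(cnn_feature: np.ndarray, expression_feature: np.ndarray) -> np.ndarray:
    """Append the expression feature to the CNN feature vector.

    Hypothetical shapes: cnn_feature (D_cnn,), expression_feature (D_expr,).
    The fused vector would then feed the downstream classifier
    (the LSTM in PBLSTM, or the FC layers in the two-stream model).
    """
    return np.concatenate([cnn_feature, expression_feature])

# Example with made-up dimensions:
cnn_feat = np.random.rand(2048)   # per-frame CNN feature (assumed size)
expr_feat = np.random.rand(7)     # e.g. scores over 7 basic emotions (assumed)
fused = fuse_expression(cnn_feat, expr_feat)
print(fused.shape)  # (2055,)
```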

Usage

  1. Get the code.
git clone https://github.com/BoPang1996/Hybrid-model-for-human-action-adverb-recognition.git
cd Hybrid-model-for-human-action-adverb-recognition
  2. Get the dataset: You can download the ADHA dataset from here

  3. PBLSTM:

  • Get the pose information using OpenPose. The output is skeleton videos.
  • Use ./pose/extract.py to get the input of the PBLSTM model.
  • Run ./PBLSTM/train.py and ./PBLSTM/test.py to train the model and output its results.
  4. Two-Stream model
  • Use ./Two_Stream/get_input_data/get_optical_flow to get the optical flow of the raw video.
  • Use ./Two_Stream/get_input_data/gettrackingdata.py to get the input of the two-stream model. The output has two folders: "of" and "rgb" ("of" for the motion stream, "rgb" for the spatial stream).
  • Use ./Two_Stream/motion/train.py and ./Two_Stream/spatial/train.py to train the models, and use ./Two_Stream/Fusion/test.py to output the result.
  5. Expression
  • Use this hybrid model to get the expression result of the video. This model is the winner of EmotiW 2016. The result is saved as a txt file.
  • To combine the expression feature into the above two models, set the parameter "withexpression" to "True" in train.py and test.py, and set the parameter "expression_path" to the folder containing the expression results.
  • Retrain the models.
  6. Fusion to get the final result
  • Run ./Hybrid_Fusion/Fusion.py to get the final result of the hybrid model.
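The final fusion step can be thought of as a late fusion over the per-model adverb scores. The sketch below shows one common scheme, a weighted average of the two score vectors; the function name, the equal weights, and the averaging itself are assumptions for illustration, not necessarily what Fusion.py implements:

```python
import numpy as np

def late_fusion(pblstm_scores: np.ndarray,
                two_stream_scores: np.ndarray,
                weights=(0.5, 0.5)) -> np.ndarray:
    """Weighted average of the adverb score vectors from the two models.

    Hypothetical inputs: one score per adverb class from each model,
    e.g. the outputs of PBLSTM/test.py and Two_Stream/Fusion/test.py.
    """
    stacked = np.stack([pblstm_scores, two_stream_scores])  # (2, n_classes)
    w = np.asarray(weights)[:, None]                        # (2, 1), broadcast
    return (w * stacked).sum(axis=0)

# Toy example with three adverb classes:
pblstm = np.array([0.2, 0.7, 0.1])
two_stream = np.array([0.4, 0.4, 0.2])
final = late_fusion(pblstm, two_stream)
print(final)                 # [0.3  0.55 0.15]
print(int(final.argmax()))   # 1 -> index of the predicted adverb
```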

Citation

Please cite the paper in your publications if it helps your research:

@article{pang2018adha,
  title={Human Action Adverb Recognition: ADHA Dataset and A Hybrid Model},
  author={Pang, Bo and Zha, Kaiwen and Lu, Cewu},
  journal={arXiv preprint},
  year={2018}
}

Acknowledgements

Thanks to OpenPose and the hybrid expression model.