
GGNet

Code for our CVPR 2021 paper "Glance and Gaze: Inferring Action-aware Points for One-Stage Human-Object Interaction Detection".

Getting Started

Installation

Requires pytorch 0.4.1 and torchvision 0.2.1.

git clone https://github.com/SherlockHolmes221/GGNet.git
cd GGNet
pip install -r requirements.txt
cd src/lib/models/networks/DCNv2
./make.sh
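
Before compiling DCNv2 it can help to confirm that the expected library versions and a CUDA device are visible. A minimal check, assuming only the versions listed above:

# check_env.py -- sanity-check the environment before building DCNv2
import torch
import torchvision

print("torch:", torch.__version__)              # expected: 0.4.1
print("torchvision:", torchvision.__version__)  # expected: 0.2.1
print("CUDA available:", torch.cuda.is_available())
assert torch.__version__.startswith("0.4"), "this DCNv2 build targets pytorch 0.4.x"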

Training and Test

Dataset Preparation

  1. HICO-DET: organize the files in the Dataset folder as follows (a layout-check sketch follows this list):

    |-- Dataset/
    |   |-- <hico-det>/
    |       |-- images/
    |       |   |-- train2015/
    |       |   |-- test2015/
    |       |-- annotations/

    The annotations are provided here.

  2. V-COCO: organize the files in the Dataset folder as follows:

    |-- Dataset/
    |   |-- <verbcoco>/
    |       |-- images/
    |       |   |-- train2014/
    |       |   |-- val2014/
    |       |-- annotations/

    The annotations are provided here.

  3. Download the pre-trained Hourglass-104 model trained on the COCO object detection dataset, as provided by CenterNet, and put it into the models folder.
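
A small sketch for verifying the layout above before launching training; <hico-det> and <verbcoco> are the placeholder folder names from the trees, so substitute whatever names are used locally, and Dataset is assumed to sit at the repository root:

# check_dataset_layout.py -- verify the folder structure described above
# Assumes Dataset/ sits at the repository root; names follow the trees above.
import os

EXPECTED = [
    "Dataset/hico-det/images/train2015",
    "Dataset/hico-det/images/test2015",
    "Dataset/hico-det/annotations",
    "Dataset/verbcoco/images/train2014",
    "Dataset/verbcoco/images/val2014",
    "Dataset/verbcoco/annotations",
]

for path in EXPECTED:
    status = "ok" if os.path.isdir(path) else "MISSING"
    print("{:8s} {}".format(status, path))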

Training and Testing

sh experiments/hico/hoidet_hico_hourglass.sh 
sh experiments/vcoco/hoidet_vcoco_hourglass.sh 
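
The scripts above can be launched directly with sh; if a specific GPU needs to be selected, a thin wrapper such as the following works (the script path is taken from above; the wrapper itself is a hypothetical convenience, not part of the repository, and any GPU settings inside the script take precedence):

# run_hico_training.py -- hypothetical wrapper around the training script above
import os
import subprocess

env = dict(os.environ, CUDA_VISIBLE_DEVICES="0")  # pin training to GPU 0
subprocess.check_call(["sh", "experiments/hico/hoidet_hico_hourglass.sh"], env=env)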

Evaluation

python src/lib/eval/hico_eval_de_ko.py --exp hoidet_hico_ggnet 
python src/lib/eval/vcoco_eval.py --exp hoidet_vcoco_ggnet 
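
Both evaluation scripts report mean average precision over HOI categories (HICO-DET under the Default and Known-Object settings, V-COCO as role AP). For reference, here is a minimal, generic sketch of how average precision is computed from a ranked list of detections; the repository's own evaluation code is authoritative and may differ in interpolation details:

# ap_sketch.py -- generic average precision from ranked detections (illustrative only)
import numpy as np

def average_precision(scores, is_true_positive, num_gt):
    # Sort detections by descending confidence.
    order = np.argsort(-np.asarray(scores, dtype=float))
    tp = np.asarray(is_true_positive, dtype=float)[order]
    fp = 1.0 - tp
    tp_cum, fp_cum = np.cumsum(tp), np.cumsum(fp)
    recall = tp_cum / max(num_gt, 1)
    precision = tp_cum / np.maximum(tp_cum + fp_cum, 1e-12)
    # Make precision monotonically non-increasing, then integrate over recall.
    for i in range(len(precision) - 2, -1, -1):
        precision[i] = max(precision[i], precision[i + 1])
    ap, prev_recall = 0.0, 0.0
    for p, r in zip(precision, recall):
        ap += p * (r - prev_recall)
        prev_recall = r
    return ap

# Toy usage: 4 ranked detections, 3 ground-truth pairs -> AP of about 0.83
print(average_precision([0.9, 0.8, 0.6, 0.3], [1, 0, 1, 1], num_gt=3))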

Results on HICO-DET and V-COCO

Our Results on HICO-DET dataset

Model         Full (def)  Rare (def)  Non-Rare (def)  Full (ko)  Rare (ko)  Non-Rare (ko)  FPS  Download
hourglass104  23.47       16.48       25.60           27.36      20.23      29.48          9    model

Our Results on V-COCO dataset

Model         AP_role  Download
hourglass104  54.7     model
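
To take a quick look at a downloaded checkpoint before pointing the test scripts at it, something like the following works; the file name below is hypothetical and should be replaced with wherever the model from the table was saved:

# inspect_checkpoint.py -- peek at a downloaded checkpoint (file name is hypothetical)
import torch

ckpt = torch.load("models/ggnet_hico_hourglass104.pth", map_location="cpu")
state = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
print("number of tensors:", len(state))
for name in list(state)[:5]:  # show a few parameter names and shapes
    print(name, tuple(state[name].shape))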

Citation

@inproceedings{zhong2021glance,
  title={Glance and Gaze: Inferring Action-aware Points for One-Stage Human-Object Interaction Detection},
  author={Zhong, Xubin and Qu, Xian and Ding, Changxing and Tao, Dacheng},
  booktitle={CVPR},
  year={2021}
}

Acknowledgement

PPDM
