
Learning Cross-Modal Deep Representations for Robust Pedestrian Detection

By Dan Xu, Wanli Ouyang, Elisa Ricci, Xiaogang Wang and Nicu Sebe

Introduction

CMT-CNN is a pedestrian detection approach associated with our arXiv submission https://arxiv.org/abs/1704.02431, which was accepted at CVPR 2017. The code is implemented in Caffe and has been tested under Ubuntu 14.04, MATLAB 2015b, and CUDA 8.0.

Cite CMT-CNN

Please consider citing our paper if the code is helpful in your research work:

@inproceedings{xu2017learning,
  title={Learning Cross-Modal Deep Representations for Robust Pedestrian Detection},
  author={Xu, Dan and Ouyang, Wanli and Ricci, Elisa and Wang, Xiaogang and Sebe, Nicu},
  booktitle={CVPR},
  year={2017}
}

Requirements

Please first download and install the modified Caffe version provided for CMT-CNN. To test, download the trained model and the network definition file from Google Drive.
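For reference, below is a minimal pycaffe test sketch, assuming the modified Caffe build exposes the standard Python bindings and that the downloaded network definition and trained model are saved locally. All file names and the `data` blob name are placeholders, not the repository's actual files.

```python
import caffe

# Assumed placeholders for the files downloaded from Google Drive.
caffe.set_mode_gpu()
caffe.set_device(0)
net = caffe.Net('cmt_cnn_deploy.prototxt',   # network definition file
                'cmt_cnn.caffemodel',        # trained model weights
                caffe.TEST)

# Load and preprocess one test image (assumes an input blob named 'data').
image = caffe.io.load_image('test_pedestrian.jpg')
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))     # HWC -> CHW
transformer.set_channel_swap('data', (2, 1, 0))  # RGB -> BGR
transformer.set_raw_scale('data', 255)           # [0, 1] -> [0, 255]

net.blobs['data'].data[...] = transformer.preprocess('data', image)
output = net.forward()
print({name: blob.shape for name, blob in output.items()})
```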
