AlphaVideo is an open-source video understanding toolbox based on PyTorch, covering multi-object tracking and action detection. In AlphaVideo, we released TubeTK, the first one-stage multi-object tracking (MOT) system, which achieves 66.9 MOTA on the MOT-16 dataset and 63 MOTA on the MOT-17 dataset. For action detection, we released the efficient AlphAction model, the first open-source project to achieve 30+ mAP (32.4 mAP) with a single model on the AVA dataset.
Install from PyPI:
pip install alphavideo
Or clone the repository from GitHub:
git clone https://github.com/Alpha-Video/AlphaVideo.git alphaVideo
cd alphaVideo
Set up and install AlphaVideo:
pip install .
For this task, we provide the TubeTK model, the official implementation of the paper "TubeTK: Adopting Tubes to Track Multi-Object in a One-Step Training Model" (CVPR 2020, oral). Detailed training and testing scripts for the MOT-Challenge datasets can be found here.
- Accurate end-to-end multi-object tracking.
- Requires no ready-made image-level object detection models.
- Pre-trained model for pedestrian tracking.
- Input: frame list or video.
- Output: videos decorated with colored bounding boxes; Btube lists.
- For detailed usage, see our docs.
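The Btube lists returned as output represent each track as a bounding tube rather than per-frame boxes. As a rough conceptual sketch only (this is not AlphaVideo's actual data structure or API; the class and field names below are hypothetical), a tube can be pictured as boxes anchored at key frames, with boxes at intermediate frames recovered by linear interpolation:

```python
# Hypothetical sketch of a bounding tube: boxes at a start and end frame,
# with intermediate boxes linearly interpolated. Illustrative only; this is
# NOT AlphaVideo's real Btube class -- see the docs for the actual format.
from dataclasses import dataclass

Box = tuple  # (x1, y1, x2, y2)

@dataclass
class Btube:
    start_frame: int
    end_frame: int
    start_box: Box
    end_box: Box

    def box_at(self, frame: int) -> Box:
        """Linearly interpolate the box at an intermediate frame."""
        if not self.start_frame <= frame <= self.end_frame:
            raise ValueError("frame outside tube span")
        span = self.end_frame - self.start_frame
        t = 0.0 if span == 0 else (frame - self.start_frame) / span
        return tuple(a + t * (b - a) for a, b in zip(self.start_box, self.end_box))

tube = Btube(0, 10, (0, 0, 10, 10), (10, 10, 20, 20))
print(tube.box_at(5))  # box halfway along the tube: (5.0, 5.0, 15.0, 15.0)
```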
For this task, we provide the AlphAction model, an implementation of the paper "Asynchronous Interaction Aggregation for Action Detection", which was accepted to ECCV 2020.
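The core idea behind the paper is to aggregate features from interacting entities (other people, objects, temporal context) with attention when classifying an actor's action. As a toy, hypothetical sketch of attention-based feature aggregation in pure Python (this is not AlphAction's actual module; the function name and shapes are illustrative assumptions):

```python
# Toy sketch of attention-style interaction aggregation: a query person
# feature attends over context features (other people/objects), producing
# a softmax-weighted sum. Illustrative only; NOT AlphAction's real AIA module.
import math

def attend(query, contexts):
    """Aggregate context vectors, weighted by softmax of dot-product scores."""
    scores = [sum(q * c for q, c in zip(query, ctx)) for ctx in contexts]
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(query)
    return [sum(w * ctx[i] for w, ctx in zip(weights, contexts))
            for i in range(dim)]

person = [1.0, 0.0]                       # feature of the actor being classified
others = [[1.0, 0.0], [0.0, 1.0]]         # features of interacting entities
agg = attend(person, others)              # leans toward the more similar context
```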
@inproceedings{pang2020tubeTK,
title={TubeTK: Adopting Tubes to Track Multi-Object in a One-Step Training Model},
author={Pang, Bo and Li, Yizhuo and Zhang, Yifan and Li, Muchen and Lu, Cewu},
booktitle={CVPR},
year={2020}
}
@inproceedings{tang2020asynchronous,
title={Asynchronous Interaction Aggregation for Action Detection},
author={Tang, Jiajun and Xia, Jin and Mu, Xinzhi and Pang, Bo and Lu, Cewu},
booktitle={ECCV},
year={2020}
}
This project is open-sourced and maintained by the Machine Vision and Intelligence Group (MVIG) at Shanghai Jiao Tong University.