MMAction2

OpenMMLab's Next Generation Video Understanding Toolbox and Benchmark



English | 简体中文


🥳 🚀 What's New 🔝

The default branch has been switched to main (previously 1.x) from master (currently 0.x), and we encourage users to migrate to the latest version, which brings more supported models, stronger pre-training checkpoints, and simpler coding. Please refer to the Migration Guide for more details.

Release (2023.07.04): v1.1.0 with the following new features:

  • Support CLIP-based multi-modality models: ActionCLIP (ArXiv'2021) and CLIP4Clip (ArXiv'2022)
  • Support rich projects: gesture recognition, a spatio-temporal action detection tutorial, and knowledge distillation
  • Support the HACS-segments dataset (ICCV'2019), MultiSports dataset (ICCV'2021), and Kinetics-710 dataset (ArXiv'2022)
  • Support VideoMAE V2 (CVPR'2023), and VideoMAE (NeurIPS'2022) on action detection
  • Support TCANet (CVPR'2021)
  • Support pure Python style configuration files and downloading datasets with a single MIM command (see the sketch after this list)
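To illustrate the last item, here is a minimal sketch of the pure Python configuration style, assuming MMEngine's read_base helper; the base-config module name below is hypothetical:

# Hedged sketch of a pure Python style config file; the imported base
# module (tsn_r50_base) is hypothetical and stands in for a real base config.
from mmengine.config import read_base

with read_base():
    from .tsn_r50_base import *  # brings base fields (e.g. model) into scope

# Base fields are plain Python objects here, so they can be edited directly.
model['backbone']['depth'] = 101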

📖 Introduction 🔝

MMAction2 is an open-source toolbox for video understanding based on PyTorch. It is a part of the OpenMMLab project.

Action Recognition on Kinetics-400 (left) and Skeleton-based Action Recognition on NTU-RGB+D-120 (right)


Skeleton-based Spatio-Temporal Action Detection and Action Recognition Results on Kinetics-400


Spatio-Temporal Action Detection Results on AVA-2.1

🎁 Major Features 🔝

  • Modular design: We decompose a video understanding framework into different components, so a customized framework can be constructed by combining different modules (see the config sketch after this list).

  • Support for five major video understanding tasks: MMAction2 implements various algorithms for multiple video understanding tasks, including action recognition, action localization, spatio-temporal action detection, skeleton-based action recognition, and video retrieval.

  • Well tested and documented: We provide detailed documentation and API reference, as well as unit tests.
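To make the modular design concrete, here is a minimal sketch of how a recognizer is assembled from interchangeable parts in an MMAction2-style config; the field values are illustrative, not a copy of a shipped config:

# Sketch of MMAction2's modular config style: a recognizer is built by
# combining a registered backbone with a registered classification head.
# Field values below are illustrative; see the model zoo configs for real ones.
model = dict(
    type='Recognizer2D',
    backbone=dict(
        type='ResNet',        # swap in any registered backbone
        depth=50),
    cls_head=dict(
        type='TSNHead',       # swap in any registered head
        num_classes=400,
        in_channels=2048))

Swapping the backbone or head for another registered module is a one-line change, which is the point of the modular design.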

🛠️ Installation 🔝

MMAction2 depends on PyTorch, MMCV, MMEngine, MMDetection (optional) and MMPose (optional).

Please refer to install.md for detailed instructions.

Quick instructions
conda create --name openmmlab python=3.8 -y
conda activate openmmlab
conda install pytorch torchvision -c pytorch  # Installs the latest PyTorch and cudatoolkit; check that they match your environment.
pip install -U openmim
mim install mmengine
mim install mmcv
mim install mmdet  # optional
mim install mmpose  # optional
git clone https://github.com/open-mmlab/mmaction2.git
cd mmaction2
pip install -v -e .
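
After installation, a quick sanity check can confirm the package is importable (the installed package imports as mmaction):

# Verify the installation: import the package and print its version.
import mmaction
print(mmaction.__version__)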

👀 Model Zoo 🔝

Results and models are available in the model zoo.

Supported models

Action Recognition: C3D (CVPR'2014), TSN (ECCV'2016), I3D (CVPR'2017), C2D (CVPR'2018), I3D Non-Local (CVPR'2018), R(2+1)D (CVPR'2018), TRN (ECCV'2018), TSM (ICCV'2019), TSM Non-Local (ICCV'2019), SlowOnly (ICCV'2019), SlowFast (ICCV'2019), CSN (ICCV'2019), TIN (AAAI'2020), TPN (CVPR'2020), X3D (CVPR'2020), MultiModality: Audio (ArXiv'2020), TANet (ArXiv'2020), TimeSformer (ICML'2021), ActionCLIP (ArXiv'2021), VideoSwin (CVPR'2022), VideoMAE (NeurIPS'2022), MViT V2 (CVPR'2022), UniFormer V1 (ICLR'2022), UniFormer V2 (ArXiv'2022), VideoMAE V2 (CVPR'2023)

Action Localization: BSN (ECCV'2018), BMN (ICCV'2019), TCANet (CVPR'2021)

Spatio-Temporal Action Detection: ACRN (ECCV'2018), SlowOnly+Fast R-CNN (ICCV'2019), SlowFast+Fast R-CNN (ICCV'2019), LFB (CVPR'2019), VideoMAE (NeurIPS'2022)

Skeleton-based Action Recognition: ST-GCN (AAAI'2018), 2s-AGCN (CVPR'2019), PoseC3D (CVPR'2022), STGCN++ (ArXiv'2022), CTRGCN (CVPR'2021), MSG3D (CVPR'2020)

Video Retrieval: CLIP4Clip (ArXiv'2022)

Supported datasets

Action Recognition: HMDB51 (Homepage) (ICCV'2011), UCF101 (Homepage) (CRCV-IR-12-01), ActivityNet (Homepage) (CVPR'2015), Kinetics-[400/600/700] (Homepage) (CVPR'2017), SthV1 (ICCV'2017), SthV2 (Homepage) (ICCV'2017), Diving48 (Homepage) (ECCV'2018), Jester (Homepage) (ICCV'2019), Moments in Time (Homepage) (TPAMI'2019), Multi-Moments in Time (Homepage) (ArXiv'2019), HVU (Homepage) (ECCV'2020), OmniSource (Homepage) (ECCV'2020), FineGYM (Homepage) (CVPR'2020), Kinetics-710 (Homepage) (ArXiv'2022)

Action Localization: THUMOS14 (Homepage) (THUMOS Challenge 2014), ActivityNet (Homepage) (CVPR'2015), HACS (Homepage) (ICCV'2019)

Spatio-Temporal Action Detection: UCF101-24* (Homepage) (CRCV-IR-12-01), JHMDB* (Homepage) (ICCV'2015), AVA (Homepage) (CVPR'2018), AVA-Kinetics (Homepage) (ArXiv'2020), MultiSports (Homepage) (ICCV'2021)

Skeleton-based Action Recognition: PoseC3D-FineGYM (Homepage) (ArXiv'2021), PoseC3D-NTURGB+D (Homepage) (ArXiv'2021), PoseC3D-UCF101 (Homepage) (ArXiv'2021), PoseC3D-HMDB51 (Homepage) (ArXiv'2021)

Video Retrieval: MSRVTT (Homepage) (CVPR'2016)

👨‍🏫 Get Started 🔝

For tutorials, we provide user guides for basic usage in the documentation. A minimal inference example is sketched below.
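
As a first taste of the API, here is a minimal inference sketch using the high-level helpers in mmaction.apis; the config, checkpoint, and video paths are placeholders you must supply from the model zoo:

# Minimal action-recognition inference sketch; all paths are placeholders.
from mmaction.apis import inference_recognizer, init_recognizer

config_file = 'path/to/config.py'           # placeholder: a model zoo config
checkpoint_file = 'path/to/checkpoint.pth'  # placeholder: the matching weights
video_file = 'path/to/video.mp4'            # placeholder: any input video

model = init_recognizer(config_file, checkpoint_file, device='cpu')
result = inference_recognizer(model, video_file)  # prediction for the video
print(result)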

Research works built on MMAction2 by users from the community:
  • Video Swin Transformer. [paper][github]
  • Evidential Deep Learning for Open Set Action Recognition, ICCV 2021 Oral. [paper][github]
  • Rethinking Self-supervised Correspondence Learning: A Video Frame-level Similarity Perspective, ICCV 2021 Oral. [paper][github]

🎫 License 🔝

This project is released under the Apache 2.0 license.

🖊️ Citation 🔝

If you find this project useful in your research, please consider citing:

@misc{2020mmaction2,
    title={OpenMMLab's Next Generation Video Understanding Toolbox and Benchmark},
    author={MMAction2 Contributors},
    howpublished = {\url{https://github.com/open-mmlab/mmaction2}},
    year={2020}
}

🙌 Contributing 🔝

We appreciate all contributions to improve MMAction2. Please refer to CONTRIBUTING.md in MMCV for more details about the contributing guidelines.

🤝 Acknowledgement 🔝

MMAction2 is an open-source project contributed to by researchers and engineers from various colleges and companies. We appreciate all the contributors who implement their methods or add new features, as well as users who give valuable feedback. We hope the toolbox and benchmark serve the growing research community by providing a flexible toolkit for reimplementing existing methods and developing new models.

๐Ÿ—๏ธ Projects in OpenMMLab ๐Ÿ”

  • MMEngine: OpenMMLab foundational library for training deep learning models.
  • MMCV: OpenMMLab foundational library for computer vision.
  • MIM: MIM installs OpenMMLab packages.
  • MMEval: A unified evaluation library for multiple machine learning libraries.
  • MMPreTrain: OpenMMLab pre-training toolbox and benchmark.
  • MMDetection: OpenMMLab detection toolbox and benchmark.
  • MMDetection3D: OpenMMLab's next-generation platform for general 3D object detection.
  • MMRotate: OpenMMLab rotated object detection toolbox and benchmark.
  • MMYOLO: OpenMMLab YOLO series toolbox and benchmark.
  • MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark.
  • MMOCR: OpenMMLab text detection, recognition, and understanding toolbox.
  • MMPose: OpenMMLab pose estimation toolbox and benchmark.
  • MMHuman3D: OpenMMLab 3D human parametric model toolbox and benchmark.
  • MMSelfSup: OpenMMLab self-supervised learning toolbox and benchmark.
  • MMRazor: OpenMMLab model compression toolbox and benchmark.
  • MMFewShot: OpenMMLab fewshot learning toolbox and benchmark.
  • MMAction2: OpenMMLab's next-generation action understanding toolbox and benchmark.
  • MMTracking: OpenMMLab video perception toolbox and benchmark.
  • MMFlow: OpenMMLab optical flow toolbox and benchmark.
  • MMagic: OpenMMLab Advanced, Generative and Intelligent Creation toolbox.
  • MMGeneration: OpenMMLab image and video generative models toolbox.
  • MMDeploy: OpenMMLab model deployment framework.
  • Playground: A central hub for gathering and showcasing amazing projects built upon OpenMMLab.
