
[Enhance] Support the Training of ActionClip #2620

Merged
merged 7 commits into open-mmlab:dev-1.x on Oct 12, 2023

Conversation

@Dai-Wenxun (Collaborator) commented Aug 1, 2023

ActionCLIP Project

ActionCLIP: A New Paradigm for Video Action Recognition

Abstract

The canonical approach to video action recognition dictates a neural model to do a classic and standard 1-of-N majority vote task. They are trained to predict a fixed set of predefined categories, limiting their transferable ability on new datasets with unseen concepts. In this paper, we provide a new perspective on action recognition by attaching importance to the semantic information of label texts rather than simply mapping them into numbers. Specifically, we model this task as a video-text matching problem within a multimodal learning framework, which strengthens the video representation with more semantic language supervision and enables our model to do zero-shot action recognition without any further labeled data or parameters requirements. Moreover, to handle the deficiency of label texts and make use of tremendous web data, we propose a new paradigm based on this multimodal learning framework for action recognition, which we dub "pre-train, prompt and fine-tune". This paradigm first learns powerful representations from pre-training on a large amount of web image-text or video-text data. Then it makes the action recognition task to act more like pre-training problems via prompt engineering. Finally, it end-to-end fine-tunes on target datasets to obtain strong performance. We give an instantiation of the new paradigm, ActionCLIP, which not only has superior and flexible zero-shot/few-shot transfer ability but also reaches a top performance on general action recognition task, achieving 83.8% top-1 accuracy on Kinetics-400 with a ViT-B/16 as the backbone.

Usage

Setup Environment

Please refer to Installation to install MMAction2, then run the following command to install CLIP:

pip install git+https://github.com/openai/CLIP.git
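
If you want to make sure the installation succeeded, a quick Python check is to list the CLIP architectures shipped with the OpenAI package (clip.available_models() is part of the openai/CLIP API):

import clip

# Should print the available CLIP architectures, e.g. 'ViT-B/32' and 'ViT-B/16'.
print(clip.available_models())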

Assume that you are located at $MMACTION2/projects/actionclip.

Add the current folder to PYTHONPATH so that Python can find your code. Run the following command in the current directory to add it; note that you need to run it again every time you open a new shell:

export PYTHONPATH=`pwd`:$PYTHONPATH
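
To confirm that the export took effect in your current shell, a minimal sanity check is to import the project's local models package, which the zero-shot examples below also rely on:

# If PYTHONPATH is set correctly, the local ActionCLIP project code is importable.
from models.load import init_actionclip

print('ActionCLIP project code is importable:', init_actionclip is not None)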

Data Preparation

Prepare the Kinetics400 dataset according to the instructions.

Create a symbolic link from $MMACTION2/data to ./data in the current directory so that Python can locate your data. Run the following command in the current directory to create it:

ln -s ../../data ./data
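
You can check that the link resolves as expected from Python; note that the kinetics400 subfolder name below is only an assumption based on the standard MMAction2 data layout:

import os

# Verify the symbolic link created above and where it points.
print('is symlink:', os.path.islink('data'))
print('resolves to:', os.path.realpath('data'))
# Assumed standard layout; adjust the path if your Kinetics-400 data lives elsewhere.
print('kinetics400 present:', os.path.isdir('data/kinetics400'))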

Training commands

To train with a single GPU:

mim train mmaction configs/actionclip_vit-base-p32-res224-clip-pre_g8xb16_1x1x8_k400-rgb.py

To train with multiple GPUs:

mim train mmaction configs/actionclip_vit-base-p32-res224-clip-pre_g8xb16_1x1x8_k400-rgb.py --launcher pytorch --gpus 8

To train with multiple GPUs using Slurm:

mim train mmaction configs/actionclip_vit-base-p32-res224-clip-pre_g8xb16_1x1x8_k400-rgb.py --launcher slurm \
    --gpus 8 --gpus-per-node 8 --partition $PARTITION
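
Before launching a run, you can also load the config with mmengine to inspect or override settings programmatically. This is only a sketch; the field names below (train_dataloader.batch_size, optim_wrapper.optimizer) assume the standard MMAction2 1.x config layout:

import mmengine

# Load the training config used by the commands above.
cfg = mmengine.Config.fromfile(
    'configs/actionclip_vit-base-p32-res224-clip-pre_g8xb16_1x1x8_k400-rgb.py')

# Inspect a couple of common settings (assumed field names; adjust if they differ).
print(cfg.train_dataloader.batch_size)
print(cfg.optim_wrapper.optimizer)

# Overrides can be applied here and dumped to a new config file, e.g.:
# cfg.train_dataloader.batch_size = 8
# cfg.dump('my_actionclip_config.py')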

Testing commands

To test with a single GPU:

mim test mmaction configs/actionclip_vit-base-p32-res224-clip-pre_g8xb16_1x1x8_k400-rgb.py --checkpoint $CHECKPOINT

To test with multiple GPUs:

mim test mmaction configs/actionclip_vit-base-p32-res224-clip-pre_g8xb16_1x1x8_k400-rgb.py --checkpoint $CHECKPOINT --launcher pytorch --gpus 8

To test with multiple GPUs using Slurm:

mim test mmaction configs/actionclip_vit-base-p32-res224-clip-pre_g8xb16_1x1x8_k400-rgb.py --checkpoint $CHECKPOINT --launcher slurm \
    --gpus 8 --gpus-per-node 8 --partition $PARTITION

Results

Kinetics400

| frame sampling strategy | backbone | top1 acc | top5 acc | testing protocol | config | ckpt |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| 1x1x8 | ViT-B/32 | 77.6 | 93.8 | 8 clips x 1 crop | config | ckpt[1] |
| 1x1x8 | ViT-B/16 | 80.3 | 95.2 | 8 clips x 1 crop | config | ckpt[1] |
| 1x1x16 | ViT-B/16 | 81.1 | 95.6 | 16 clips x 1 crop | config | ckpt[1] |
| 1x1x32 | ViT-B/16 | 81.3 | 95.8 | 32 clips x 1 crop | config | ckpt[1] |

[1] These models are ported from the ActionCLIP repo and tested on our data; the ported checkpoints support testing only. Because the testing data differs, our reported test accuracy is lower than that of the original repository by about one point on average. Please refer to this issue for more details.

Kinetics400 (Trained on Our K400 dataset)

| frame sampling strategy | gpus | backbone | top1 acc | top5 acc | testing protocol | config | ckpt | log |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| 1x1x8 | 8 | ViT-B/32 | 77.5 | 93.2 | 8 clips x 1 crop | config | ckpt | log |
| 1x1x8 | 8 | ViT-B/16 | 81.3 | 95.2 | 8 clips x 1 crop | config | ckpt | log |

Zero-Shot Prediction

We offer the two methods below for zero-shot prediction. The test video test.mp4 can be downloaded from here.

Using Naive PyTorch

import torch
import clip
from models.load import init_actionclip
from mmaction.utils import register_all_modules

# Register all MMAction2 modules so the ActionCLIP components can be built.
register_all_modules(True)

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = init_actionclip('ViT-B/32-8', device=device)

# Load and preprocess the video, then add a batch dimension.
video_anno = dict(filename='test.mp4', start_index=0)
video = preprocess(video_anno).unsqueeze(0).to(device)

# Build text prompts by filling the template with the candidate labels.
template = 'The woman is {}'
labels = ['singing', 'dancing', 'performing']
text = clip.tokenize([template.format(label) for label in labels]).to(device)

with torch.no_grad():
    video_features = model.encode_video(video)
    text_features = model.encode_text(text)

# Normalize both feature sets and compute the video-text similarity.
video_features /= video_features.norm(dim=-1, keepdim=True)
text_features /= text_features.norm(dim=-1, keepdim=True)
similarity = (100 * video_features @ text_features.T).softmax(dim=-1)
probs = similarity.cpu().numpy()

print("Label probs:", probs)  # [[9.995e-01 5.364e-07 6.666e-04]]

Using MMAction2 APIs

import torch
import mmengine
from mmaction.utils import register_all_modules
from mmaction.apis import inference_recognizer, init_recognizer

register_all_modules(True)

config_path = 'configs/actionclip_vit-base-p32-res224-clip-pre_1x1x8_k400-rgb.py'
checkpoint_path = 'https://download.openmmlab.com/mmaction/v1.0/projects/actionclip/actionclip_vit-base-p32-res224-clip-pre_1x1x8_k400-rgb/vit-b-32-8f.pth'
template = 'The woman is {}'
labels = ['singing', 'dancing', 'performing']

# Update the labels; the default is the label list of K400.
config = mmengine.Config.fromfile(config_path)
config.model.labels_or_label_file = labels
config.model.template = template

device = "cuda" if torch.cuda.is_available() else "cpu"
model = init_recognizer(config=config, checkpoint=checkpoint_path, device=device)

pred_result = inference_recognizer(model, 'test.mp4')
probs = pred_result.pred_scores.item.cpu().numpy()
print("Label probs:", probs)  # [9.995e-01 5.364e-07 6.666e-04]

Citation

@article{wang2021actionclip,
  title={Actionclip: A new paradigm for video action recognition},
  author={Wang, Mengmeng and Xing, Jiazheng and Liu, Yong},
  journal={arXiv preprint arXiv:2109.08472},
  year={2021}
}

codecov bot commented Aug 1, 2023

Codecov Report

Patch and project coverage are unchanged.

Comparison: base (c548788) 76.18% vs. head (60e2c6d) 76.18%.

❗ Current head 60e2c6d differs from pull request most recent head d482c91. Consider uploading reports for the commit d482c91 to get more accurate results

Additional details and impacted files
@@           Coverage Diff            @@
##           dev-1.x    #2620   +/-   ##
========================================
  Coverage    76.18%   76.18%           
========================================
  Files          170      170           
  Lines        13792    13792           
  Branches      2361     2361           
========================================
  Hits         10507    10507           
  Misses        2718     2718           
  Partials       567      567           
| Flag | Coverage Δ |
| :---: | :---: |
| unittests | 76.18% <ø> (ø) |


@hukkai self-requested a review October 11, 2023 05:06

@hukkai (Collaborator) commented Oct 11, 2023

@ly015

@ly015 merged commit 17b88a3 into open-mmlab:dev-1.x on Oct 12, 2023
12 checks passed