[Project] Add ActionCLIP Project #2470
Codecov Report: patch and project coverage have no change.

```
@@           Coverage Diff           @@
##           dev-1.x   #2470   +/-  ##
=======================================
  Coverage    76.96%   76.96%
=======================================
  Files          159      159
  Lines        12598    12598
  Branches      2116     2116
=======================================
  Hits          9696     9696
  Misses        2393     2393
  Partials       509      509
=======================================
```
ActionCLIP Project
ActionCLIP: A New Paradigm for Video Action Recognition
Abstract
The canonical approach to video action recognition dictates that a neural model perform a classic and standard 1-of-N majority vote task. Such models are trained to predict a fixed set of predefined categories, which limits their transferability to new datasets with unseen concepts. In this paper, we provide a new perspective on action recognition by attaching importance to the semantic information of label texts rather than simply mapping them to numbers. Specifically, we model the task as a video-text matching problem within a multimodal learning framework, which strengthens the video representation with more semantic language supervision and enables our model to do zero-shot action recognition without any further labeled data or parameter requirements. Moreover, to handle the deficiency of label texts and make use of tremendous web data, we propose a new paradigm based on this multimodal learning framework for action recognition, which we dub "pre-train, prompt and fine-tune". This paradigm first learns powerful representations by pre-training on a large amount of web image-text or video-text data. It then makes the action recognition task act more like the pre-training problems via prompt engineering. Finally, it is fine-tuned end-to-end on target datasets to obtain strong performance. We give an instantiation of the new paradigm, ActionCLIP, which not only has superior and flexible zero-shot/few-shot transfer ability but also reaches top performance on general action recognition tasks, achieving 83.8% top-1 accuracy on Kinetics-400 with a ViT-B/16 backbone.
Usage
Setup Environment
Please refer to Installation to install MMAction2.
Assume that you are located at `$MMACTION2/projects/actionclip`. Add the current folder to `PYTHONPATH` so that Python can find your code. Run the following command in the current directory to add it:
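A typical way to do this, following the convention of other MMAction2 projects (a sketch; adjust to your shell):

```bash
# Prepend the current directory to PYTHONPATH so the project modules are importable.
export PYTHONPATH=`pwd`:$PYTHONPATH
```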
Data Preparation
Prepare the Kinetics400 dataset according to the instructions.
Create a symbolic link from `$MMACTION2/data` to `./data` in the current directory so that Python can locate your data. Run the following command in the current directory to create the symbolic link:
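Given the layout assumed above (`$MMACTION2/projects/actionclip`), the link can be created like this (a sketch; verify the relative path in your checkout):

```bash
# ./data -> $MMACTION2/data (two levels up from projects/actionclip)
ln -s ../../data ./data
```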
Testing commands
To test with a single GPU:
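MMAction2 projects are typically tested through MIM; a sketch, where `$CONFIG` is a config file under `configs/` and `$CHECKPOINT` is the corresponding checkpoint path (both placeholders):

```bash
mim test mmaction $CONFIG --checkpoint $CHECKPOINT
```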
To test with multiple GPUs:
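With the same placeholders, distributed testing can be launched via the PyTorch launcher (a sketch):

```bash
mim test mmaction $CONFIG --checkpoint $CHECKPOINT --launcher pytorch --gpus 8
```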
To test with multiple GPUs via Slurm:
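A sketch using MIM's Slurm launcher, where `$PARTITION` is a placeholder for your cluster partition:

```bash
mim test mmaction $CONFIG --checkpoint $CHECKPOINT --launcher slurm \
    --gpus 8 --gpus-per-node 8 --partition $PARTITION
```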
Results
Kinetics400
[1] The models are ported from the ActionCLIP repo and tested on our data. Currently, we only support testing of ActionCLIP models. Due to differences in the testing data, our reported test accuracy differs from that of the original repository (on average, about one point lower). Please refer to this issue for more details.
Zero-Shot Prediction
We offer two methods for zero-shot prediction. The `test.mp4` used in the examples can be downloaded from here.
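The project's own snippets are not reproduced here; as a rough illustration of the underlying idea (encode sampled frames and prompted label texts, then match by similarity), here is a minimal sketch using the Hugging Face CLIP implementation and `decord` for frame loading. These are hypothetical stand-ins, not the ActionCLIP project API:

```python
# Minimal CLIP-style zero-shot action recognition sketch (hypothetical,
# not the project's code): sample frames, embed frames and prompted
# label texts, and pick the label with the highest video-text similarity.
import torch
from decord import VideoReader
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch16")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch16")

labels = ["archery", "dancing", "playing guitar"]           # example label set
prompts = [f"a video of a person {lbl}" for lbl in labels]  # prompt engineering

vr = VideoReader("test.mp4")
idx = torch.linspace(0, len(vr) - 1, steps=8).long().tolist()  # 8 uniform frames
frames = [vr[i].asnumpy() for i in idx]

inputs = processor(text=prompts, images=frames, return_tensors="pt", padding=True)
with torch.no_grad():
    out = model(**inputs)

# Average the frame embeddings into one video embedding, normalize,
# and score it against the normalized text embeddings.
video = out.image_embeds.mean(dim=0, keepdim=True)
video = video / video.norm(dim=-1, keepdim=True)
text = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
probs = (100.0 * video @ text.T).softmax(dim=-1)
print(labels[probs.argmax().item()])
```

Note that ActionCLIP itself adds a temporal fusion module on top of the frame embeddings; the simple mean pooling here is only a stand-in.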
Citation
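The reference for the ActionCLIP paper:

```bibtex
@article{wang2021actionclip,
  title   = {ActionCLIP: A New Paradigm for Video Action Recognition},
  author  = {Wang, Mengmeng and Xing, Jiazheng and Liu, Yong},
  journal = {arXiv preprint arXiv:2109.08472},
  year    = {2021}
}
```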