Revisiting Anchor Mechanisms for Temporal Action Localization
This repository is the official implementation of Revisiting Anchor Mechanisms for Temporal Action Localization.

In this work, we study the weakly supervised temporal action localization task. Most current action localization methods follow an anchor-based pipeline: depicting action instances by pre-defined anchors, learning to select the anchors closest to the ground truth, and predicting the confidence of anchors with refinements. Pre-defined anchors encode priors on the location and duration of action instances, which facilitates localizing common action instances but limits flexibility for action instances with drastically varying durations, especially extremely short or extremely long ones. To address this problem, this paper proposes a novel anchor-free action localization module that assists action localization with temporal points. Specifically, this module represents an action instance as a point together with its distances to the starting and ending boundaries, alleviating the pre-defined anchor restrictions on action location and duration. The proposed anchor-free module can therefore predict action instances whose durations are either extremely short or extremely long. By combining the proposed anchor-free module with a conventional anchor-based module, we propose a novel action localization framework, called A2Net. The cooperation between the anchor-free and anchor-based modules achieves superior performance to the state of the art on THUMOS14 (45.5% vs. 42.8%). Furthermore, comprehensive experiments demonstrate the complementarity between the two modules, making A2Net simple but effective.
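The point-plus-distances representation described above can be illustrated in a few lines. This is only a sketch of the idea, not the repository's implementation; the function name and inputs are hypothetical.

```python
def decode_anchor_free(t, d_start, d_end):
    """Recover an action segment from a temporal point `t` and its predicted
    distances to the starting and ending boundaries (illustrative sketch)."""
    return (t - d_start, t + d_end)

# A point at time 10 predicting distance 3 to the start and 5 to the end
# yields the segment [7, 15]. No pre-defined anchor scale is involved, so
# arbitrarily short or long segments are expressible.
print(decode_anchor_free(10.0, 3.0, 5.0))  # (7.0, 15.0)
```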
To install requirements:
conda env create -f environment.yaml
Before running the code, please activate this conda environment.
Download THUMOS14 from BaiDuYun (code: lele).
Please ensure the data structure is as below:

├── data
│   └── thumos
│       ├── val
│       │   ├── video_validation_0000051.npz
│       │   ├── video_validation_0000052.npz
│       │   └── ...
│       └── test
│           ├── video_test_0000004.npz
│           ├── video_test_0000006.npz
│           └── ...
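Since each video is stored as a single .npz file, one quick sanity check after downloading is to open a file with NumPy and list its contents. This is a hedged sketch: the array key (`feature`) and shape below are illustrative assumptions, not the repo's documented format, and the snippet writes a stand-in file so it runs without the download.

```python
import numpy as np

# Write a stand-in file so the snippet is self-contained; with the real data,
# point `path` at a downloaded file under data/thumos/val/ instead.
path = "video_validation_0000051.npz"
np.savez(path, feature=np.zeros((128, 1024), dtype=np.float32))  # key/shape are assumptions

data = np.load(path)
print(list(data.files))        # names of the arrays stored in the archive
print(data["feature"].shape)   # (128, 1024) for this stand-in file
```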
To train the A2Net model on the THUMOS14 dataset, please first modify the parameters in "./experiments/A2Net_thumos.yaml".
Then run these commands:

cd ./tools
python main.py
You can download pretrained models here:
- THUMOS14 (code: lele), trained on THUMOS14 with the same parameters as "./experiments/A2Net_thumos.yaml".
Our model achieves the following performance on THUMOS14:

- A2Net: 45.5% (vs. 42.8% for the prior state of the art, as reported in the abstract above).
If you have any questions, please file an issue or contact Le Yang via email@example.com.