Audio-Visual Event Localization in Unconstrained Videos (To appear in ECCV 2018)
The AVE dataset can be downloaded from https://drive.google.com/open?id=1FjKwe79e0u96vdjIVwfRQ1V6SoDHe7kK.
Audio and visual features (7.7 GB) are also released. Please put the videos of the AVE dataset into the /data/AVE folder and the features into the /data folder before running the code.
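A quick way to confirm the layout is a small check script like the sketch below; the feature file names used here are assumptions, so match them to the archives you actually downloaded.

```python
import os

# Minimal sanity-check sketch; the feature file names are hypothetical,
# adjust them to the files you actually placed under /data.
expected = [
    "data/AVE",                # raw videos of the AVE dataset
    "data/audio_feature.h5",   # assumed name of the released audio features
    "data/visual_feature.h5",  # assumed name of the released visual features
]

for path in expected:
    status = "found" if os.path.exists(path) else "MISSING"
    print("{:7s} {}".format(status, path))
```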
Scripts for generating audio and visual features: https://drive.google.com/file/d/1TJL3cIpZsPHGVAdMgyr43u_vlsxcghKY/view?usp=sharing (feel free to modify them to process your own audio and visual data).
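For reference, the rough recipe behind the visual side of feature extraction is: sample frames with ffmpeg, then run them through an ImageNet-pretrained VGG-19 and keep the pool5 feature maps. The Keras sketch below only illustrates that recipe under assumed settings (1 frame per second, the block5_pool layer); it is not the released script, so match the sampling rate and layer choice to the provided features.

```python
import subprocess
import numpy as np
from keras.applications.vgg19 import VGG19, preprocess_input
from keras.preprocessing import image

# Assumed recipe: one frame per 1-second segment, VGG-19 block5_pool features
# (7x7x512). Adjust the sampling and the layer to match the released features.
model = VGG19(weights="imagenet", include_top=False)  # outputs block5_pool maps

def extract_frames(video_path, out_pattern="frame_%02d.jpg", fps=1):
    """Dump one frame per second with ffmpeg (hypothetical helper)."""
    subprocess.call(["ffmpeg", "-i", video_path, "-vf", "fps=%d" % fps, out_pattern])

def frame_feature(frame_path):
    """Return the 7x7x512 VGG-19 pool5 feature map for one frame."""
    img = image.load_img(frame_path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return model.predict(x)[0]
```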
Requirements: Python-3.6, PyTorch-0.3.0, Keras, ffmpeg.
Run: python attention_visualization.py to generate audio-guided visual attention maps.
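Conceptually, audio-guided visual attention scores each spatial location of the visual feature map with the audio feature and pools the map with the resulting weights. The PyTorch module below is only a minimal sketch of that idea with assumed dimensions (128-d audio features, 512-d visual vectors over a 7x7 grid); it is not the exact module used in this repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioGuidedAttention(nn.Module):
    """Minimal sketch: an audio feature attends over spatial visual features."""
    def __init__(self, a_dim=128, v_dim=512, hidden=512):
        super(AudioGuidedAttention, self).__init__()
        self.fc_a = nn.Linear(a_dim, hidden)   # project the audio feature
        self.fc_v = nn.Linear(v_dim, hidden)   # project each spatial visual vector
        self.fc_s = nn.Linear(hidden, 1)       # scalar attention score per location

    def forward(self, audio, visual):
        # audio: (batch, a_dim); visual: (batch, regions, v_dim), e.g. regions = 7 * 7
        score = self.fc_s(torch.tanh(self.fc_v(visual) + self.fc_a(audio).unsqueeze(1)))
        alpha = F.softmax(score, dim=1)          # attention map over spatial regions
        attended = (alpha * visual).sum(dim=1)   # (batch, v_dim) attended visual feature
        return attended, alpha.squeeze(-1)
```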
Testing:
A+V-att model in the paper: python supervised_main.py --model_name AV_att
DMRN model in the paper: python supervised_main.py --model_name DMRN
Training:
python supervised_main.py --model_name AV_att --train
For this task, we add some videos without audio-visual events to the training data; the labels of these videos are background. The processed visual features can be found in visual_feature_noisy.h5. Put this feature file into the data folder.
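The released features are HDF5 files, so they are easy to inspect with h5py; the sketch below reads the dataset key from the file itself rather than assuming a name.

```python
import h5py

# Inspect the noisy visual features; the dataset key is read from the file
# instead of being hard-coded.
with h5py.File("data/visual_feature_noisy.h5", "r") as f:
    for key in f.keys():
        print(key, f[key].shape, f[key].dtype)
    features = f[list(f.keys())[0]][...]  # load the first (assumed only) dataset
print(features.shape)
```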
Testing:
W-A+V-att model in the paper: python weak_supervised_main.py
Training:
python weak_supervised_main.py --train
For this task, we developed a cross-modal matching network. Here, we use visual feature vectors obtained via global average pooling, which can be found here. Please put the feature file into the data folder. Note that the code was implemented with Keras-2.0 using TensorFlow as the backend.
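Global average pooling here simply means averaging each segment's spatial feature map over its spatial grid to obtain one vector per segment; a minimal sketch with assumed shapes (7x7x512 pool5-style maps) is given below.

```python
import numpy as np

# Assumed input: per-segment VGG-19 pool5 features of shape
# (num_segments, 7, 7, 512). Global average pooling collapses the
# spatial grid, giving one 512-d vector per segment.
segment_maps = np.random.rand(10, 7, 7, 512).astype("float32")  # placeholder data
pooled = segment_maps.mean(axis=(1, 2))                          # shape (10, 512)
print(pooled.shape)
```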
Testing:
python cmm_test.py
Training:
python cmm_train.py
[1] Rouditchenko, Andrew, et al. "Self-supervised Audio-visual Co-segmentation." ICASSP, 2019. [Paper]
[2] Lin, Yan-Bo, Yu-Jhe Li, and Yu-Chiang Frank Wang. "Dual-modality seq2seq network for audio-visual event localization." ICASSP, 2019. [Paper]
[3] Rana, Aakanksha, Cagri Ozcinar, and Aljosa Smolic. "Towards Generating Ambisonics Using Audio-visual Cue for Virtual Reality." ICASSP, 2019. [Paper]
[4] Wu, Yu, Linchao Zhu, Yan Yan, and Yi Yang. "Dual Attention Matching for Audio-Visual Event Localization." ICCV, 2019 (oral). [website]
If you find this work useful, please consider citing it.
@InProceedings{tian2018ave,
author={Yapeng Tian and Jing Shi and Bochen Li and Zhiyao Duan and Chenliang Xu},
title={Audio-Visual Event Localization in Unconstrained Videos},
booktitle = {ECCV},
year = {2018}
}
Audio features are extracted using VGGish, and the audio-guided visual attention model is implemented largely based on adaptive attention. We thank the authors for sharing their code. If you use our code, please also cite their nice work.