Audio-Visual Event Localization in Unconstrained Videos, ECCV 2018


Project page | arXiv

AVE Dataset & Features

The AVE dataset can be downloaded from the project page.

Audio features and visual features (7.7 GB) are also released. Please put the videos of the AVE dataset into the /data/AVE folder and the features into the /data folder before running the code.
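A minimal sketch of loading the released features, assuming they are HDF5 files read with h5py. The file name `audio_feature.h5` and dataset key `avadataset` are illustrative assumptions, and a small dummy file is created so the sketch runs end to end; match the names to the actual files you downloaded.

```python
# Sketch: load an AVE-style .h5 feature file and check its layout.
# File name and dataset key below are assumptions for illustration.
import os
import h5py
import numpy as np

def load_features(path, key="avadataset"):
    """Return the feature array stored under `key` in an .h5 file."""
    with h5py.File(path, "r") as f:
        return np.asarray(f[key])

os.makedirs("data/AVE", exist_ok=True)                 # videos go here
with h5py.File("data/audio_feature.h5", "w") as f:     # hypothetical file name
    f.create_dataset("avadataset",
                     data=np.zeros((4, 10, 128), dtype=np.float32))

feats = load_features("data/audio_feature.h5")
print(feats.shape)   # (4, 10, 128): videos x 10 one-second segments x dims
```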


Requirements: Python 3.6, PyTorch 0.3.1, Keras, and ffmpeg.

Visualize attention maps

Run the provided Python script to generate audio-guided visual attention maps.
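For intuition, here is a minimal numpy sketch of audio-guided visual attention, not the authors' implementation: the audio feature scores each spatial location of a CNN feature map, a softmax over locations gives the attention map, and the map pools the visual features. All dimensions and parameter matrices (`Wv`, `Wa`, `w`) are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def audio_guided_attention(visual, audio, Wv, Wa, w):
    """
    visual: (HW, Dv) spatial visual features for one segment
    audio:  (Da,)    audio feature for the same segment
    Wv: (Dv, K), Wa: (Da, K), w: (K,) -- illustrative parameters
    """
    h = np.tanh(visual @ Wv + audio @ Wa)   # (HW, K) joint hidden state
    att = softmax(h @ w)                    # (HW,) attention over locations
    pooled = att @ visual                   # (Dv,) attended visual feature
    return pooled, att

rng = np.random.default_rng(0)
vis = rng.normal(size=(49, 512))            # 7x7 grid of 512-d CNN features
aud = rng.normal(size=(128,))               # VGGish-style audio embedding
Wv = rng.normal(size=(512, 64))
Wa = rng.normal(size=(128, 64))
w = rng.normal(size=(64,))
pooled, att = audio_guided_attention(vis, aud, Wv, Wa, w)
print(pooled.shape, att.shape)              # attended feature and attention map
```

Reshaping `att` back to the 7x7 grid and overlaying it on the frame gives the attention maps shown in the paper.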


Supervised audio-visual event localization

Testing:

A+V-att model in the paper: python --model_name AV_att

DMRN model in the paper: python --model_name DMRN

Training:

python --model_name AV_att --train
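In the supervised setting, the models score each one-second segment over the event classes plus "background", and the per-segment argmax gives the temporal extent of the event. The sketch below shows that decoding step only; the background index and score shapes are illustrative assumptions.

```python
# Sketch: turn segment-level class scores into localized events.
import numpy as np

BACKGROUND = 0  # assumed index of the background class

def localize(segment_scores):
    """segment_scores: (T, C) class scores for T one-second segments."""
    labels = segment_scores.argmax(axis=1)
    events = []
    start = None
    for t, lab in enumerate(list(labels) + [BACKGROUND]):  # sentinel flush
        if lab != BACKGROUND and start is None:
            start, cls = t, lab
        elif start is not None and lab != cls:
            events.append((start, t, int(cls)))   # [start, end) and class
            start = None if lab == BACKGROUND else t
            cls = lab
    return events

scores = np.full((10, 3), -1.0)
scores[:, BACKGROUND] = 0.0     # background wins by default
scores[3:7, 2] = 1.0            # class 2 active in segments 3..6
print(localize(scores))         # [(3, 7, 2)]
```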

Weakly-supervised audio-visual event localization

We add some videos that contain no audio-visual events to the training data; these videos are labeled as background. The processed visual features can be found in visual_feature_noisy.h5. Put the feature file into the data folder.

Testing:

W-A+V-att model in the paper: python

Training:

python --train
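In the weakly-supervised setting, only a video-level label is available at training time, so segment scores must be aggregated into a single video-level prediction. The sketch below uses max pooling over time, a common multiple-instance choice; the paper's exact aggregation may differ, and all shapes are illustrative.

```python
# Sketch: multiple-instance aggregation of segment scores for
# weakly-supervised (video-level) training.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def video_level_prediction(segment_logits):
    """segment_logits: (T, C) -> video-level class probabilities (C,)."""
    video_logits = segment_logits.max(axis=0)   # MIL max pooling over segments
    return softmax(video_logits)

logits = np.zeros((10, 4))
logits[5, 2] = 5.0              # one strongly active segment of class 2
probs = video_level_prediction(logits)
print(probs.argmax())           # 2
```

A cross-entropy loss on `probs` against the video-level label then trains the segment scorer without segment annotations.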

Cross-modality localization

For this task, we developed a cross-modal matching network. Here, we use visual feature vectors obtained via global average pooling, which you can find here. Please put the feature file into the data folder. Note that this code was implemented in Keras 2.0 with TensorFlow as the backend.
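Cross-modality localization means finding, given a short query in one modality, the best-matching temporal window in the other. A toy numpy sketch of that sliding-window matching is below; the real network matches learned embeddings, whereas this sketch compares raw feature vectors directly, and all names and sizes are illustrative.

```python
# Sketch: locate an audio query inside a visual feature sequence by
# sliding-window distance (learned embeddings omitted for brevity).
import numpy as np

def locate(query, sequence):
    """query: (L, D) one modality; sequence: (T, D) the other, T >= L.
    Returns the start index of the best-matching window."""
    L = len(query)
    dists = [np.linalg.norm(sequence[s:s + L] - query)
             for s in range(len(sequence) - L + 1)]
    return int(np.argmin(dists))

rng = np.random.default_rng(0)
vis = rng.normal(size=(10, 16))                          # 10 visual segments
aud_query = vis[4:7] + 0.01 * rng.normal(size=(3, 16))   # matches segments 4..6
print(locate(aud_query, vis))                            # 4
```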






If you find this work useful, please consider citing it:

@inproceedings{tian2018ave,
  author    = {Yapeng Tian and Jing Shi and Bochen Li and Zhiyao Duan and Chenliang Xu},
  title     = {Audio-Visual Event Localization in Unconstrained Videos},
  booktitle = {ECCV},
  year      = {2018}
}

Audio features are extracted using VGGish, and the audio-guided visual attention model was implemented largely based on adaptive attention. We thank the authors for sharing their code. If you use our code, please also cite their nice work.