
AVA Actions Dataset

The AVA dataset densely annotates 80 atomic visual actions in 430 movie clips, with actions localized in space and time, resulting in 1.62M action labels; multiple labels per human occur frequently. Clips are drawn from contiguous 15-minute segments of movies, which opens the door to temporal reasoning about activities. The dataset is split into 235 videos for training, 64 for validation, and 131 for test. This page provides download instructions and mirror sites for the AVA dataset. Please visit the project page for more details on the dataset.

Download Videos

CVDF hosts the videos in the AVA dataset. Please download them with the following URL patterns:

https://s3.amazonaws.com/ava-dataset/trainval/[file_name]
https://s3.amazonaws.com/ava-dataset/test/[file_name]

You can download the list of training/validation file names here, and the list of test file names here.
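
For convenience, here is a minimal Python sketch of the download loop, assuming you have saved one of the file-name lists above to a local text file. The name file_names_trainval.txt below is a placeholder for whichever list you downloaded, not a file this repository ships:

import os
import urllib.request

BASE_URL = "https://s3.amazonaws.com/ava-dataset/trainval/"
FILE_LIST = "file_names_trainval.txt"  # placeholder: path to the file-name list downloaded above
OUT_DIR = "ava_videos"

os.makedirs(OUT_DIR, exist_ok=True)
with open(FILE_LIST) as f:
    for line in f:
        name = line.strip()
        if not name:
            continue
        dest = os.path.join(OUT_DIR, name)
        if os.path.exists(dest):
            continue  # already present; makes the loop resumable
        urllib.request.urlretrieve(BASE_URL + name, dest)
        print("downloaded", name)

For the test split, point BASE_URL at the test/ prefix instead.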

Download Annotations

The public annotations for the training and validation sets can be downloaded here: ava_v2.2.zip.
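
Once downloaded, the archive can be unpacked with Python's standard zipfile module; a minimal sketch (the output directory name is arbitrary):

import zipfile

with zipfile.ZipFile("ava_v2.2.zip") as zf:
    zf.extractall("ava_annotations")  # unpack the annotation files into a local directory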

AVA ActiveSpeaker Dataset

AVA ActiveSpeaker associates speaking activity with a visible face on the AVA v1.0 videos, resulting in 3.65 million labeled frames across ~39K face tracks. A detailed description of this dataset is in the arXiv paper.

Download Videos

CVDF hosts the videos in AVA ActiveSpeaker. Please download them with the following URL pattern:

https://s3.amazonaws.com/ava-dataset/trainval/[file_name]

You can download the list of file names here.

Download Annotations

The public annotations for AVA ActiveSpeaker can be downloaded here: ava_activespeaker_train_v1.0.tar.bz2 and ava_activespeaker_val_v1.0.tar.bz2.
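
These archives are bzip2-compressed tarballs; a short sketch extracting both with Python's standard tarfile module (the output directory name is arbitrary):

import tarfile

for archive in ("ava_activespeaker_train_v1.0.tar.bz2",
                "ava_activespeaker_val_v1.0.tar.bz2"):
    with tarfile.open(archive, "r:bz2") as tf:
        tf.extractall("ava_activespeaker")  # unpack the annotation files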

AVA Speech Dataset

The AVA-Speech dataset densely annotates speech activity for the movie clips in the AVA v1.0 dataset. It explicitly labels 3 background noise conditions (Clean Speech, Speech with background Music, and Speech with background Noise), resulting in ~40K labeled segments spanning 40 hours of data. Please visit the project page for more details on the dataset.

Download Videos

CVDF hosts the videos in AVA Speech. Please download them with the following URL pattern:

https://s3.amazonaws.com/ava-dataset/trainval/[file_name]

You can download the list of file names here.

Download Annotations

The public annotations for AVA-Speech can be downloaded here: ava_speech_labels_v1.csv.
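
To sanity-check the labels, a sketch like the one below tallies the labeled duration per class. It assumes each row has the form video_id, start_time, end_time, label; that schema is an assumption here, so verify it against the CSV and the project page before relying on it:

import csv
from collections import Counter

seconds_per_label = Counter()
with open("ava_speech_labels_v1.csv") as f:
    for row in csv.reader(f):
        try:
            # assumed schema: video_id, start_time, end_time, label
            video_id, start, end, label = row
            seconds_per_label[label] += float(end) - float(start)
        except ValueError:
            continue  # skip a header or malformed row, if present

for label, secs in seconds_per_label.most_common():
    print(f"{label}: {secs / 3600:.1f} hours")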
