Okutama-Action

master
Switch branches/tags

Name already in use

A tag already exists with the provided branch name. Many Git commands accept both tag and branch names, so creating this branch may cause unexpected behavior. Are you sure you want to create this branch?
Code

Files

Permalink
Failed to load latest commit information.
Type
Name
Latest commit message
Commit time
 
 
js
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 

Abstract

We present Okutama-Action, a new video dataset for aerial-view concurrent human action detection. It consists of 43 fully-annotated, minute-long sequences covering 12 action classes. Okutama-Action features many challenges missing in current datasets, including dynamic transitions between actions, significant changes in scale and aspect ratio, abrupt camera movement, and multi-labeled actors. As a result, our dataset is more challenging than existing ones and will help push the field forward to enable real-world applications.

Publications

Okutama-Action: An Aerial View Video Dataset for Concurrent Human Action Detection

Download

  • Sample (one 4K video and labels): sample
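
The exact layout of the label files is not described in this README; the sketch below assumes a VATIC-style annotation format (track ID, bounding box, frame number, lost/occluded/generated flags, the object label, then any action labels), which is common for aerial tracking datasets. The path `labels/1.1.1.txt` is only a placeholder.

```python
# Minimal sketch for reading one annotation file, assuming a VATIC-style format:
#   track_id xmin ymin xmax ymax frame lost occluded generated "Person" "Action" ...
# The path is a placeholder; point it at a label file from the sample download.
import shlex
from collections import defaultdict

def load_annotations(path):
    """Return {frame: [(track_id, (xmin, ymin, xmax, ymax), [actions]), ...]}."""
    per_frame = defaultdict(list)
    with open(path) as f:
        for line in f:
            parts = shlex.split(line)   # handles the quoted label/action fields
            if not parts:
                continue
            track_id = int(parts[0])
            box = tuple(int(v) for v in parts[1:5])
            frame = int(parts[5])
            lost = parts[6] == "1"      # "lost" flag: box left the field of view
            if lost:
                continue
            actions = parts[10:]        # everything after the object label
            per_frame[frame].append((track_id, box, actions))
    return per_frame

annotations = load_annotations("labels/1.1.1.txt")  # placeholder path
print(len(annotations), "annotated frames")
```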

We offer the dataset in two different formats:

In addition, we provide trained models in Caffe: models
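
The prototxt and weight file names shipped with the models are not listed here; the sketch below only shows the generic pycaffe pattern for loading a deploy definition together with its weights. `deploy.prototxt` and `okutama.caffemodel` are placeholder names, to be replaced by the files from the download.

```python
# Minimal sketch of loading one of the released Caffe models for inference.
# File names are placeholders; use the prototxt/caffemodel pair you downloaded.
import caffe

caffe.set_mode_cpu()                      # or caffe.set_mode_gpu(); caffe.set_device(0)
net = caffe.Net("deploy.prototxt",        # network definition (placeholder name)
                "okutama.caffemodel",     # trained weights (placeholder name)
                caffe.TEST)

print("inputs:", list(net.inputs))
print("outputs:", list(net.outputs))
```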

About

The creation of this dataset was supported by Prendinger Lab at the National Institute of Informatics, Tokyo, Japan.