Abstract

We present Okutama-Action, a new video dataset for aerial view concurrent human action detection. It consists of 43 minute-long fully-annotated sequences with 12 action classes. Okutama-Action features many challenges missing in current datasets, including dynamic transition of actions, significant changes in scale and aspect ratio, abrupt camera movement, as well as multi-labeled actors. As a result, our dataset is more challenging than existing ones, and will help push the field forward to enable real-world applications.

Publications

Okutama-Action: An Aerial View Video Dataset for Concurrent Human Action Detection

Download

The training set (with labels) of Okutama-Action is available at the following link, and the test set at the following link.

About

The creation of this dataset was supported by Prendinger Lab at the National Institute of Informatics, Tokyo, Japan.
