
SkyEye dataset

Dataset for analyzing lane-less traffic behavior at intersections

The SkyEye dataset is the first aerial dataset for monitoring intersections with mixed traffic and lane-less behavior. It contains around 1 hour of video each from 4 intersections, namely Paldi (P), Nehru bridge - Ashram road (N), Swami Vivekananda bridge - Ashram road (V), and APMC market (A), in the city of Ahmedabad, India.

| Intersection | Type |
| --- | --- |
| Paldi (P) | 4-way signalized intersection |
| Nehru Bridge Ashram Road (N) | 4-way signalized intersection |
| Swami Vivekananda bridge - Ashram road (V) | 7-way signalized intersection |
| APMC market (A) | 3-way unsignalized intersection |

These intersections were selected because of the diverse traffic conditions they present.

The videos were captured using a DJI Phantom 4 Pro drone at 50 frames per second in 4K resolution (4096x2160).

Annotation

In total, 50,000 frames are annotated with 4,021 distinct road-user tracks. A detailed breakdown is given below:

Number of unique road users

| Intersection | car | bus | motorbike | auto-rickshaw | truck | van | pedestrians |
| --- | --- | --- | --- | --- | --- | --- | --- |
| P | 175 | 54 | 881 | 494 | 45 | 16 | 226 |
| V | 132 | 9 | 627 | 195 | 7 | 0 | 9 |
| N | 41 | 8 | 275 | 99 | 12 | 6 | 33 |
| A | 73 | 6 | 402 | 135 | 43 | 0 | 81 |
| Total | 421 | 77 | 2185 | 971 | 107 | 22 | 349 |

Downloads

The SkyEye dataset is available as images with bounding-box annotations for road-user localization and type detection, or as videos with tracks extracted for every road user for road-user tracking. Additionally, we also provide labeled collision-prone tracks.

Road user localization and type detection

  • The dataset consists of 49,652 images at 4096x2160 resolution here, or 198,485 sliced images at 1920x1080 here
  • Annotations for the 4096x2160 images (in Pascal VOC XML format) here (a minimal parsing sketch follows this list)
  • Annotations for the 1920x1080 images (in CSV format) here
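
Since Pascal VOC XML is a standard format, the bounding-box annotations can be read with the Python standard library alone. Below is a minimal sketch, not part of the dataset release: the file name `frame_000001.xml` is a placeholder, and the exact tag contents depend on the release.

```python
import xml.etree.ElementTree as ET

def load_voc_boxes(xml_path):
    """Parse one Pascal VOC XML file into (label, xmin, ymin, xmax, ymax) tuples."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.findall("object"):
        label = obj.find("name").text
        bb = obj.find("bndbox")
        boxes.append((
            label,
            int(float(bb.find("xmin").text)),
            int(float(bb.find("ymin").text)),
            int(float(bb.find("xmax").text)),
            int(float(bb.find("ymax").text)),
        ))
    return boxes

# Placeholder file name; one XML file corresponds to one annotated frame.
for label, xmin, ymin, xmax, ymax in load_voc_boxes("frame_000001.xml"):
    print(label, xmin, ymin, xmax, ymax)
```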

Road-user Tracking

  • The dataset consists of 5 videos that can be downloaded here
  • Annotations (in MOT format) can be downloaded here; each annotation row has the format shown below (a parsing sketch follows the road-user type table)

frame_number, object_id, top_left_x, top_left_y, width, height, road_user_type

| Road user type | Name |
| --- | --- |
| 1 | car |
| 2 | bus |
| 3 | motorbike (includes all two-wheelers) |
| 4 | autorickshaw |
| 5 | truck |
| 6 | van |
| 7 | pedestrian |
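
A minimal sketch for loading the tracking annotations, assuming a plain comma-separated text file with one row per bounding box in the column order given above; the file name `paldi_1.txt` is only a placeholder.

```python
import csv
from collections import defaultdict

# Road-user type ids and names, as listed in the table above.
ROAD_USER_TYPES = {
    1: "car", 2: "bus", 3: "motorbike", 4: "autorickshaw",
    5: "truck", 6: "van", 7: "pedestrian",
}

def load_tracks(annotation_path):
    """Group annotation rows by object_id -> list of (frame, x, y, w, h, type_name)."""
    tracks = defaultdict(list)
    with open(annotation_path, newline="") as f:
        for row in csv.reader(f):
            frame, obj_id, x, y, w, h, rtype = (float(v) for v in row[:7])
            tracks[int(obj_id)].append(
                (int(frame), x, y, w, h, ROAD_USER_TYPES[int(rtype)])
            )
    return tracks

tracks = load_tracks("paldi_1.txt")  # placeholder file name
print(len(tracks), "road-user tracks loaded")
```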

Benchmarks

Road user localization and type detection

The RetinaNet architecture is trained for road user localization and type detection.

The mean average precision (mAP) of the trained model is 0.8175 (a quick check follows the table below).

| Road user type | Average Precision (AP) |
| --- | --- |
| car | 0.9747 |
| bus | 0.9863 |
| motorbike | 0.6136 |
| autorickshaw | 0.9802 |
| truck | 0.9568 |
| van | 0.9695 |
| pedestrian | 0.2413 |
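
Assuming the reported mAP is the unweighted mean of the seven per-class APs above, it can be reproduced directly:

```python
# Per-class APs copied from the table above; their mean matches the reported 0.8175.
aps = {
    "car": 0.9747, "bus": 0.9863, "motorbike": 0.6136, "autorickshaw": 0.9802,
    "truck": 0.9568, "van": 0.9695, "pedestrian": 0.2413,
}
mean_ap = sum(aps.values()) / len(aps)
print(round(mean_ap, 4))  # 0.8175
```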

Visualization

Road-user Tracking

For tracking, the SORT algorithm is evaluated as a preliminary benchmark, with the annotated road-user detections used as input to the tracker (an input-conversion sketch follows the results table below).

| Video name | Precision | Recall | False Acceptance Rate (FAR) |
| --- | --- | --- | --- |
| Paldi 1 | 7.2 | 7.2 | 19.53 |
| Vivek 1 | 15.9 | 15.9 | 17.12 |
| Nehru 1 | 8.8 | 8.8 | 11.53 |
| APMC 1 | 8.6 | 8.6 | 15.65 |
| Paldi 2 | 9.7 | 9.8 | 15.72 |
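
For reference, the sketch below shows how per-frame boxes in the annotation format above could be fed to a SORT-style tracker. It assumes the interface of the commonly used open-source SORT reference implementation (a `Sort` class whose `update()` takes `[x1, y1, x2, y2, score]` rows and returns boxes with track ids); this is an illustration, not the exact benchmark pipeline.

```python
import numpy as np
from sort import Sort  # assumes the open-source SORT reference implementation is installed

tracker = Sort()  # default Kalman-filter motion model with IoU-based association

def track_frames(detections_per_frame):
    """detections_per_frame: iterable of (frame_number, [(x, y, w, h, score), ...])."""
    results = []
    for frame_number, boxes in detections_per_frame:
        # Convert (top_left_x, top_left_y, width, height) to (x1, y1, x2, y2, score).
        dets = np.array([[x, y, x + w, y + h, s] for x, y, w, h, s in boxes])
        if dets.size == 0:
            dets = np.empty((0, 5))  # the reference implementation expects an empty Nx5 array
        for x1, y1, x2, y2, track_id in tracker.update(dets):
            results.append((frame_number, int(track_id), x1, y1, x2 - x1, y2 - y1))
    return results
```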

Visualization

Some additional tracking output videos, produced with the DeepSORT tracker, are available here.

License

This dataset is provided for academic and research purposes only.

People

Annotators

  • Yusuke Doi (土井 悠輔), Bachelor student, Dept. of TSE, Nihon University
  • Sho Matsunoshita (松野下 翔), Bachelor student, Dept. of TSE, Nihon University
  • Daichi Tashiro (田代 大智), Bachelor student, Dept. of TSE, Nihon University
  • Kaoru Kuga (空閑 香), Bachelor student, Dept. of TSE, Nihon University

Citation

If you use this dataset, consider citing one of our papers.

@inproceedings{roy2020defining,
  author={D. {Roy} and Naveen Kumar {K.} and C. K. {Mohan}},
  booktitle={2020 IEEE Intelligent Transportation Systems Conference (ITSC)},
  title={Defining Traffic States based on Spatio-Temporal Traffic Graphs},
  year={2020}
}

@article{roy2020detection,
  title={Detection of Collision-Prone Vehicle Behavior at Intersections using Siamese Interaction LSTM},
  author={Roy, Debaditya and Ishizaka, Tetsuhiro and Mohan, C Krishna and Fukuda, Atsushi},
  journal={IEEE Transactions on Intelligent Transportation Systems},
  year={2020},
  publisher={IEEE}
}

Acknowledgment

This work has been conducted as part of the SATREPS project M2Smart “Smart Cities development for Emerging Countries by Multimodal Transport System based on Sensing, Network and Big Data Analysis of Regional Transportation” (JPMJSA1606), funded by JST and JICA.