
Multi-Object Tracking in the Dark

The official repo of the CVPR 2024 paper Multi-Object Tracking in the Dark


News

  • [2024-06] 🔥 The LMOT dataset is now available!

Abstract

Low-light scenes are prevalent in real-world applications (e.g., autonomous driving and surveillance at night). Recently, multi-object tracking in various practical use cases has received much attention, but multi-object tracking in dark scenes is rarely considered. In this paper, we focus on multi-object tracking in dark scenes. To address the lack of datasets, we first build a Low-light Multi-Object Tracking (LMOT) dataset. LMOT provides well-aligned low-light video pairs captured by our dual-camera system, and high-quality multi-object tracking annotations for all videos. Then, we propose a low-light multi-object tracking method, termed LTrack. We introduce an adaptive low-pass downsample module to enhance the low-frequency components of images outside the sensor noises. A degradation suppression learning strategy enables the model to learn invariant information under noise disturbance and image quality degradation. These components improve the robustness of multi-object tracking in dark scenes. We conducted a comprehensive analysis of our LMOT dataset and the proposed LTrack. Experimental results demonstrate the superiority of the proposed method and its competitiveness in real night low-light scenes.

Dataset

Construction

The LMOT dataset is collected using our dual-camera system, which provides well-aligned low-light and well-lit video pairs (LMOT-dual). We also collect a real low-light MOT dataset to evaluate performance in real nighttime dark scenes, captured using a single camera with the same camera settings (LMOT-real).

We provide the dataset in both RAW (RGGB) and sRGB formats, at 20 FPS, 10 ms exposure time, and $1800\times1000$ resolution. The LMOT dataset contains a variety of outdoor city scenes, including roads, overpasses, pedestrians, and intersections. We annotate six types of moving objects: car, person, bicycle, motorcycle, bus, and truck. All annotations are carefully reviewed.
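Since the RAW frames are stored as a single-channel RGGB Bayer mosaic, a common way to prepare them for a network is to pack each 2×2 Bayer block into four half-resolution channels. The sketch below illustrates this for the LMOT frame size; it is an assumption about a typical RAW pipeline, not the exact preprocessing used by LTrack.

```python
import numpy as np

def pack_rggb(raw):
    """Pack a single-channel RGGB Bayer mosaic (H, W) into a
    half-resolution 4-channel array (H/2, W/2, 4) ordered R, G, G, B.
    Illustrative sketch only, not the official LTrack preprocessing."""
    h, w = raw.shape
    assert h % 2 == 0 and w % 2 == 0
    r  = raw[0::2, 0::2]  # top-left of each 2x2 block: red
    g1 = raw[0::2, 1::2]  # top-right: green
    g2 = raw[1::2, 0::2]  # bottom-left: green
    b  = raw[1::2, 1::2]  # bottom-right: blue
    return np.stack([r, g1, g2, b], axis=-1)

# Synthetic 16-bit mosaic at the LMOT frame size (height 1000, width 1800).
raw = np.random.randint(0, 2**16, size=(1000, 1800), dtype=np.uint16)
packed = pack_rggb(raw)
print(packed.shape)  # (500, 900, 4)
```

Packing rather than demosaicing keeps every sensor sample untouched, which matters when the low-light noise statistics themselves carry information.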

Statistics

Detailed statistics and data splits for the LMOT dataset:

| Dataset   | Split | Videos | Bbox    | Tracks | Paired Well-lit |
|-----------|-------|--------|---------|--------|-----------------|
| LMOT-dual | train | 11     | 309,466 | 1,533  | ✓               |
| LMOT-dual | val   | 4      | 131,781 | 626    | ✓               |
| LMOT-dual | test  | 11     | 312,742 | 1,644  | ✓               |
| LMOT-real | real  | 6      | 61,561  | 287    | ✗               |

Download

The LMOT dataset can be downloaded from Baidu Drive (code: xedx).

Note: Currently, we only release the training and validation sets. The test set and the remaining data will be released later along with the challenges.

Organize the files into the following structure.

{LMOT ROOT}
└── LMOT_release/
    ├── train
    │   ├── LMOT-02
    │   │   ├── gt
    │   │   │   └── gt.txt
    │   │   ├── img_dark
    │   │   │   ├── 000001.tiff
    │   │   │   └── ...
    │   │   ├── img_dark_rgb
    │   │   │   ├── 000001.tiff
    │   │   │   └── ...
    │   │   ├── img_light_rgb
    │   │   │   ├── 000001.jpg
    │   │   │   └── ...
    │   │   ├── img_light
    │   │   │   ├── 000001.tiff
    │   │   │   └── ...
    │   │   └── seqinfo.ini
    │   ├── LMOT-04
    │   └── ...
    ├── val
    │   └── ...
    ├── test
    │   └── ...
    └── real
        ├── RLMOT-01
        │   ├── gt
        │   │   └── gt.txt
        │   ├── img_real
        │   │   ├── 000001.tiff
        │   │   └── ...
        │   ├── img_real_rgb
        │   │   ├── 000001.jpg
        │   │   └── ...
        │   └── seqinfo.ini
        └── ...
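Each sequence directory carries a seqinfo.ini file, which in the MOT Challenge convention is a standard INI file describing the sequence. The sketch below parses one with Python's configparser; the exact key names (imDir, frameRate, etc.) are assumed from the MOT17 convention and the figures quoted above, and have not been verified against the released LMOT files.

```python
import configparser

# Hypothetical seqinfo.ini contents, following the MOT Challenge layout.
# Key names and values here are illustrative assumptions, not taken from
# an actual LMOT file.
sample = """\
[Sequence]
name=LMOT-02
imDir=img_dark
frameRate=20
seqLength=600
imWidth=1800
imHeight=1000
imExt=.tiff
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)  # for a real file: cfg.read(path_to_seqinfo)
seq = cfg["Sequence"]
print(seq["name"], seq.getint("frameRate"), seq.getint("imWidth"))
```

In practice you would call `cfg.read(".../LMOT-02/seqinfo.ini")` instead of `read_string`, and use `imDir` and `imExt` to build frame paths.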

The LMOT dataset is organized in the MOT Challenge 17 format. Each line in gt.txt contains

fn, id, x1, y1, w, h, ignore, classid, vis_ratio

The six annotated object categories are

'person', 'bicycle', 'car', 'motorcycle', 'bus', 'truck'
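A minimal sketch (not an official loader) for parsing one gt.txt line in the nine-field format above. The field order comes from the format string; the exact semantics of `ignore`, the index base of `classid`, and its mapping onto the category list are assumptions to be checked against the released data.

```python
# Category list as given in the README; whether classid is a 0- or 1-based
# index into it is an assumption, not confirmed by the source.
CLASSES = ['person', 'bicycle', 'car', 'motorcycle', 'bus', 'truck']

def parse_gt_line(line):
    """Parse one gt.txt line: fn, id, x1, y1, w, h, ignore, classid, vis_ratio."""
    fields = line.strip().split(',')
    return {
        'frame': int(fields[0]),                          # frame number
        'track_id': int(fields[1]),                       # identity across frames
        'bbox': tuple(float(v) for v in fields[2:6]),     # x1, y1, w, h (top-left + size)
        'ignore': int(fields[6]),                         # ignore flag (semantics assumed)
        'class_id': int(fields[7]),                       # category id
        'vis_ratio': float(fields[8]),                    # visibility ratio in [0, 1]
    }

ann = parse_gt_line("1,3,100,200,50,80,0,1,0.9")
print(ann['frame'], ann['track_id'], ann['bbox'])  # 1 3 (100.0, 200.0, 50.0, 80.0)
```

Reading a whole file is then a matter of `[parse_gt_line(l) for l in open(gt_path)]`, optionally filtering on the ignore flag and visibility ratio.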

People

In addition to the authors of the paper, several friends helped with data collection and annotation: Li Yichen, Wang Binfeng, Wang Haoyu, Wang Yuran, Zhang Taoying, and Wang Jianan. We sincerely thank them for their contributions to this work.

Agreement

Citation

If you use our dataset or code in your research, please cite our paper:

@InProceedings{wang2024lmot,
    author    = {Wang, Xinzhe and Ma, Kang and Liu, Qiankun and Zou, Yunhao and Fu, Ying},
    title     = {Multi-Object Tracking in the Dark},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year      = {2024},
    pages     = {382-392}
}

Contact

If you have any questions about our dataset, please email wangxinzhe@bit.edu.cn.
