LongShortNet: Exploring Temporal and Semantic Features Fusion in Streaming Perception

Streaming perception is central to autonomous driving, where a system must strike a careful balance between latency and accuracy. LongShortNet addresses this by fusing long-term temporal dynamics with short-term spatial semantics, yielding stronger real-time perception in complex driving scenarios.

Fig. 1: (a) Offline detection (VOD) vs. streaming perception: streaming perception operates in real time and must adapt to motion changes between frames. (b) A timeline depicting per-frame processing time.

Fig. 2: (a) Qualitative results of StreamYOLO, with red and orange boxes denoting ground truth and predictions, respectively. (b) sAP comparison between LongShortNet and StreamYOLO. More visualizations are available [here](https://rebrand.ly/wgtcloo).
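To make the latency matching in Fig. 1 concrete, here is a toy sketch of the streaming evaluation protocol: at each ground-truth timestamp, only the most recent prediction the detector has finished is available for matching. The frame rate and detector latency below are made-up numbers for illustration, not measurements from the paper.

```python
# Toy model of the streaming evaluation protocol: at each ground-truth
# timestamp, the evaluator may only use the newest prediction that the
# detector has already finished computing.
FRAME_INTERVAL = 1 / 30   # 30 FPS input stream (assumed)
LATENCY = 0.045           # per-frame detector runtime in seconds (assumed)

# The prediction for frame i becomes available at arrival_time(i) + LATENCY.
finish_times = [i * FRAME_INTERVAL + LATENCY for i in range(10)]

def matched_prediction(t: float) -> int:
    """Return the newest frame index whose prediction is ready by time t (-1 if none)."""
    ready = [i for i, ft in enumerate(finish_times) if ft <= t]
    return ready[-1] if ready else -1

for i in range(5):
    t = i * FRAME_INTERVAL
    print(f"GT at frame {i} is matched to the prediction from frame {matched_prediction(t)}")

# With 45 ms latency on a ~33 ms frame interval, every frame i >= 2 is
# evaluated against the prediction from frame i - 2; streaming methods
# such as LongShortNet close this gap by forecasting object motion.
```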

Methodology

At the heart of LongShortNet is the Long Short Fusion Module (LSFM). LSFM fuses two complementary feature streams: a long path that captures long-term temporal dynamics from multiple historical frames, and a short path that extracts short-term spatial semantics from the current frame. It supports several fusion schemes, giving the network the ability to react dynamically to real-time changes. The two components are illustrated below:

Fig. 3: (a) Overview of the LongShortNet framework. (b) A detailed view of the fusion schemes in the Long Short Fusion Module (LSFM).

For a more comprehensive understanding of the approach and its benefits, please take a look at our ICASSP paper.
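As a rough PyTorch illustration of the dual-path idea (a minimal sketch, not the paper's exact LSFM), the block below keeps the current frame's features at full width while compressing each historical frame's features to a narrow width before fusing them. The channel sizes, the number of past frames, and the 1×1-convolution fusion are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class LongShortFusionSketch(nn.Module):
    """Simplified long-short fusion (illustrative; not the exact LSFM).

    Short path: current-frame features at full width (spatial semantics).
    Long path: features from N past frames, each compressed to a narrow
    width so that history stays cheap, then concatenated and fused.
    """

    def __init__(self, channels: int = 256, num_past: int = 3, long_width: int = 64):
        super().__init__()
        self.num_past = num_past
        # Compress each historical frame's features (long path).
        self.long_proj = nn.Conv2d(channels, long_width, kernel_size=1)
        # Fuse the concatenated short + long features back to `channels`.
        self.fuse = nn.Conv2d(channels + num_past * long_width, channels, kernel_size=1)

    def forward(self, short_feat, long_feats):
        assert len(long_feats) == self.num_past
        long_path = [self.long_proj(f) for f in long_feats]   # temporal cues
        fused = torch.cat([short_feat] + long_path, dim=1)    # semantics + time
        return self.fuse(fused)

# Example: current frame plus 3 buffered past frames at a 1/16 FPN scale.
fusion = LongShortFusionSketch()
short = torch.randn(1, 256, 38, 60)
past = [torch.randn(1, 256, 38, 60) for _ in range(3)]
print(fusion(short, past).shape)  # torch.Size([1, 256, 38, 60])
```

In the full model, the long path runs over buffered historical frames, and Fig. 3(b) shows several concrete variants of this fusion; the paper compares their trade-offs.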

Benchmark

| Model          | Size      | Velocity | sAP 0.5:0.95 | sAP50 | sAP75 | Weights |
|----------------|-----------|----------|--------------|-------|-------|---------|
| LongShortNet-S | 600×960   | 1x       | 29.8         | 50.4  | 29.5  | link    |
| LongShortNet-M | 600×960   | 1x       | 34.1         | 54.8  | 34.6  | link    |
| LongShortNet-L | 600×960   | 1x       | 37.1         | 57.8  | 37.7  | link    |
| LongShortNet-L | 1200×1920 | 1x       | 42.7         | 65.4  | 45.0  | link    |

sAP denotes streaming average precision, measured on the Argoverse-HD benchmark.

Quick Start

Installation

Please refer to StreamYOLO for instructions on setting up the environment.

Train

We use the COCO-pretrained models provided by StreamYOLO as our pretrained weights.

bash run_train.sh

Evaluation

bash run_eval.sh

Citation

Please cite the following paper if this repo helps your research:

@inproceedings{li2023longshortnet,
  title={{LongShortNet}: Exploring temporal and semantic features fusion in streaming perception},
  author={Li, Chenyang and Cheng, Zhi-Qi and He, Jun-Yan and Li, Pengyu and Luo, Bin and Chen, Hanyuan and Geng, Yifeng and Lan, Jin-Peng and Xie, Xuansong},
  booktitle={ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={1--5},
  year={2023},
  organization={IEEE}
}

Acknowledgment

This project builds heavily on StreamYOLO, and we thank its authors. We are also grateful to Alibaba Group's DAMO Academy for their invaluable support.

License

LongShortNet is released under the Apache 2.0 license. Refer to the LICENSE file for more details.