Mobip

This repository contains the implementation of the paper "MobiP: A Lightweight Model for Driving Perception using MobileNet".

The architecture of Mobip

Mobip is a lightweight multi-task network that simultaneously performs traffic object detection, drivable area segmentation, and lane line detection. The model achieves an inference speed of 58 FPS on an NVIDIA Tesla V100 while maintaining competitive performance on all three tasks compared to other multi-task networks.
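The shared-backbone, multi-head layout described above can be sketched as follows. This is an illustrative outline only, not the authors' implementation: the function names are stand-ins for the real MobileNet backbone and the three task decoders.

```python
# Illustrative sketch of Mobip's multi-task layout (placeholder
# functions; the real model uses a MobileNet backbone and learned
# decoder heads).

def shared_encoder(image):
    """Stand-in for the MobileNet backbone: one forward pass
    produces features that every task head reuses."""
    return {"features": image}

def detection_head(feats):
    # Traffic object detection head (placeholder output).
    return "traffic_objects"

def drivable_area_head(feats):
    # Drivable area segmentation head (placeholder output).
    return "drivable_mask"

def lane_head(feats):
    # Lane line detection head (placeholder output).
    return "lane_mask"

def perceive(image):
    """A single encoder pass serves all three tasks at once,
    which is what keeps multi-task inference fast."""
    feats = shared_encoder(image)
    return {
        "detection": detection_head(feats),
        "drivable_area": drivable_area_head(feats),
        "lanes": lane_head(feats),
    }
```

The key point is that the (expensive) encoder runs once per frame, and only the lightweight heads are task-specific.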

Requirement

This code is based on Python 3.7, PyTorch 1.7+, and torchvision 0.8+:

conda install pytorch==1.7.0 torchvision==0.8.0 cudatoolkit=10.2 -c pytorch

See requirements.txt for additional dependencies.

pip install -r requirements.txt

Data preparation

Please follow the instructions in this link to download the BDD100K dataset. Then update the DATASET-related params in ./lib/config/default.py to point to your local copy of the dataset.
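The edit to ./lib/config/default.py might look like the sketch below. The key names here are hypothetical placeholders; check the actual DATASET section of that file for the real parameter names before editing.

```python
# Hypothetical illustration of pointing the DATASET params at a
# local BDD100K copy -- the real keys in ./lib/config/default.py
# may be named differently.
DATASET = {
    "DATAROOT": "/data/bdd100k/images",      # hypothetical key: input images
    "LABELROOT": "/data/bdd100k/det_labels", # hypothetical key: detection labels
    "MASKROOT": "/data/bdd100k/seg_masks",   # hypothetical key: segmentation masks
}
```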

Quickstart

Check the configuration in ./lib/config/default.py and start training:

python tools/train.py

Multi GPU mode:

python -m torch.distributed.launch --nproc_per_node=N tools/train.py  # N: the number of GPUs

Evaluation

The repository provides a checkpoint of our trained model for demonstration.

python tools/test.py --weights Checkpoints/model.pth

Acknowledgement

The implementation of Mobip is based on YOLOP and HybridNet. The authors would like to thank them for their help.
