
# Monocular Unified Autonomous Robotics Perception Framework

The Unified Autonomous Robotics Perception Framework is an anchor-free multi-task network that jointly handles vision perception tasks such as object detection and semantic segmentation. The multi-task paradigm not only can be efficiently customized to add new vision tasks, but also reduces computation cost, which is crucial for deployment on embedded devices. The framework currently supports 2D object detection, segmentation, and keypoint estimation; 3D tasks will be added in the future.
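To make the shared-backbone, multi-head idea concrete, here is a minimal PyTorch sketch. All module names and layer sizes are illustrative stand-ins, not the repository's actual code: a few conv layers play the role of the CSPNet+FPN backbone, and each task gets its own lightweight head over the shared features.

```python
import torch
import torch.nn as nn

class MultiTaskPerceptionNet(nn.Module):
    """Toy shared-backbone network with one head per perception task."""

    def __init__(self, num_classes=80, num_keypoints=17):
        super().__init__()
        # Shared feature extractor (stand-in for CSPNet+FPN).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Anchor-free detection head: per-pixel class scores + box offsets.
        self.det_cls = nn.Conv2d(64, num_classes, 1)
        self.det_box = nn.Conv2d(64, 4, 1)
        # Dense heads for segmentation masks and keypoint heatmaps.
        self.seg = nn.Conv2d(64, num_classes, 1)
        self.kpt = nn.Conv2d(64, num_keypoints, 1)

    def forward(self, x):
        feats = self.backbone(x)  # one forward pass shared by all tasks
        return {
            "det_cls": self.det_cls(feats),
            "det_box": self.det_box(feats),
            "seg": self.seg(feats),
            "kpt": self.kpt(feats),
        }

net = MultiTaskPerceptionNet()
out = net(torch.randn(1, 3, 64, 64))
print({k: tuple(v.shape) for k, v in out.items()})
```

Because the backbone runs once and only the small heads are task-specific, adding a new task costs one extra head rather than a whole new network.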

## Model Architecture

*(architecture diagram)*

## Model Parameters and Inference Speed

| Model      | Size | Params | Speed  |
|------------|------|--------|--------|
| CSPNet+FPN | 512  | 23.1M  | 45 fps |
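Throughput figures like the one above can be reproduced with a small timing harness. The sketch below is stdlib-only and framework-agnostic: it times any zero-argument callable, with a dummy workload standing in for the model's forward pass (hardware, batch size, and warm-up policy all affect the measured number).

```python
import time

def measure_fps(infer, n_warmup=3, n_runs=20):
    """Return the average frames per second of a zero-argument callable."""
    for _ in range(n_warmup):  # warm-up runs are excluded from timing
        infer()
    start = time.perf_counter()
    for _ in range(n_runs):
        infer()
    elapsed = time.perf_counter() - start
    return n_runs / elapsed

# Dummy workload standing in for model(image); substitute real inference.
fps = measure_fps(lambda: sum(i * i for i in range(10_000)))
print(f"{fps:.1f} fps")
```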

## Result Visualization

## Requirements

  • Python >= 3.6
  • PyTorch >= 1.11
  • torchvision that matches the PyTorch installation.
  • OpenCV
  • pycocotools
```shell
# COCOAPI=/path/to/clone/cocoapi
git clone https://github.com/cocodataset/cocoapi.git $COCOAPI
cd $COCOAPI/PythonAPI
make
python setup.py install --user
```

## Training

The training code will be released after the model has been evaluated on an open dataset.

## Visualization

```shell
python demo.py perception --demo /path/to/images --load_model ../exp/perception/0920_mat/model_last.pth --save_folder path/to/save --debug 4
```
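For reference, the command line above can be mirrored with a hypothetical `argparse` setup like the one below. This is an assumption about the interface inferred only from the flags shown, not the repository's actual `demo.py`.

```python
import argparse

def build_parser():
    """Hypothetical mirror of the demo.py flags shown above."""
    parser = argparse.ArgumentParser(description="Run perception demo")
    parser.add_argument("task", help="task name, e.g. 'perception'")
    parser.add_argument("--demo", required=True, help="path to input images")
    parser.add_argument("--load_model", required=True, help="checkpoint .pth file")
    parser.add_argument("--save_folder", required=True, help="where to save visualizations")
    parser.add_argument("--debug", type=int, default=0, help="debug/visualization level")
    return parser

# Parse the same arguments as the example command (paths are placeholders).
args = build_parser().parse_args(
    ["perception", "--demo", "imgs/", "--load_model", "model_last.pth",
     "--save_folder", "out/", "--debug", "4"]
)
print(args.task, args.debug)
```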

## TODO

  • Code clean-up
  • Evaluate on an open dataset
  • Support monocular 3D detection
