# YOSO
This is the project page for the paper:

**You Only Segment Once: Towards Real-Time Panoptic Segmentation**, CVPR 2023.

## Model Zoo

PQ is panoptic quality; FPS is measured on the listed GPU at the given input scale.

On the COCO validation set:

| Backbone | Scale | PQ | FPS | GPU | Model |
| --- | --- | --- | --- | --- | --- |
| R50 | 800,1333 | 48.4 | 23.6 | V100 | model |
| R50 | 512,800 | 46.4 | 45.6 | V100 | model |

On the Cityscapes validation set:

| Backbone | Scale | PQ | FPS | GPU | Model |
| --- | --- | --- | --- | --- | --- |
| R50 | 1024,2048 | 59.7 | 11.1 | V100 | model |
| R50 | 512,1024 | 52.5 | 22.6 | V100 | model |

On the ADE20K validation set:

| Backbone | Scale | PQ | FPS | GPU | Model |
| --- | --- | --- | --- | --- | --- |
| R50 | 640,2560 | 38.0 | 35.4 | V100 | model |

On the Mapillary Vistas validation set:

| Backbone | Scale | PQ | FPS | GPU | Model |
| --- | --- | --- | --- | --- | --- |
| R50 | 2048,2048 | 34.1 | 7.1 | A100 | model |

## Getting Started

### Installation

We recommend using Anaconda for installation:

```bash
conda create -n YOSO python=3.8 -y
conda activate YOSO
conda install pytorch==1.10.1 torchvision==0.11.2 cudatoolkit=11.3 -c pytorch
pip install pycocotools
pip install git+https://github.com/cocodataset/panopticapi.git
git clone https://github.com/hujiecpp/YOSO.git
cd YOSO
python setup.py develop
```
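After installation, a quick sanity check can confirm the environment is usable (a minimal sketch; it assumes `python setup.py develop` put detectron2 and the YOSO project on the Python path):

```python
# Verify the environment: PyTorch with CUDA, and detectron2 importable.
import torch
import detectron2

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("detectron2:", detectron2.__version__)
```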

### Datasets Preparation

See *Preparing Datasets for Mask2Former* in the Mask2Former repository.
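In short (a sketch of the layout detectron2 expects; the guide above is authoritative), detectron2 locates datasets through the `DETECTRON2_DATASETS` environment variable, which must be set before the training scripts import detectron2:

```python
import os

# Point detectron2 at the dataset root (placeholder path); the builtin
# dataset registry expects subfolders such as coco/, cityscapes/,
# ADEChallengeData2016/, and mapillary_vistas/ under this directory.
os.environ["DETECTRON2_DATASETS"] = "/path/to/datasets"
```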

### Training & Evaluation

- Train YOSO (e.g., on the COCO dataset with the R50 backbone):

```bash
python projects/YOSO/train_net.py --num-gpus 4 --config-file projects/YOSO/configs/coco/panoptic-segmentation/YOSO-R50.yaml
```

- Evaluate YOSO (e.g., on the COCO dataset with the R50 backbone):

```bash
python projects/YOSO/train_net.py --num-gpus 4 --config-file projects/YOSO/configs/coco/panoptic-segmentation/YOSO-R50.yaml --eval-only MODEL.WEIGHTS ./model_zoo/yoso_res50_coco.pth
```

### Inference on Custom Images or Videos

- Run the YOSO demo (e.g., on a video with the R50 backbone), as shown below; a programmatic single-image sketch follows.

```bash
python demo/demo.py --config-file projects/YOSO/configs/coco/panoptic-segmentation/YOSO-R50.yaml --video-input input_video.mp4 --output output_video.mp4 --opts MODEL.WEIGHTS ./model_zoo/yoso_res50_coco.pth
```
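For programmatic use, a minimal single-image sketch along the lines of what `demo/demo.py` does (assumptions: the YOSO config loads through detectron2's standard `get_cfg()` once the project's config hook is applied; the hook name in the comment below is hypothetical, see `demo/demo.py` for the real setup):

```python
import cv2
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
# add_yoso_config(cfg)  # hypothetical hook; demo/demo.py shows the actual config setup
cfg.merge_from_file("projects/YOSO/configs/coco/panoptic-segmentation/YOSO-R50.yaml")
cfg.MODEL.WEIGHTS = "./model_zoo/yoso_res50_coco.pth"

predictor = DefaultPredictor(cfg)
image = cv2.imread("input_image.jpg")  # BGR, the format detectron2 expects
panoptic_seg, segments_info = predictor(image)["panoptic_seg"]
print(f"predicted {len(segments_info)} segments on a {tuple(panoptic_seg.shape)} mask")
```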

## Acknowledgements

## Citing YOSO

If YOSO helps your research, please cite it in your publications:

```bibtex
@inproceedings{hu2023you,
  title={You Only Segment Once: Towards Real-Time Panoptic Segmentation},
  author={Hu, Jie and Huang, Linyan and Ren, Tianhe and Zhang, Shengchuan and Ji, Rongrong and Cao, Liujuan},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={17819--17829},
  year={2023}
}
```
