OpenTrans

This repo contains the code for our paper Transferable and Principled Efficiency for Open-Vocabulary Segmentation (CVPR 2024).


Abstract

Recent success of pre-trained foundation vision-language models makes Open-Vocabulary Segmentation (OVS) possible. Despite the promising performance, this approach introduces heavy computational overheads from two challenges: 1) the large model size of the backbone; 2) the expensive cost of fine-tuning. These challenges hinder this OVS strategy from being widely applicable and affordable in real-world scenarios. Although traditional methods such as model compression and efficient fine-tuning can address these challenges, they often rely on heuristics, so their solutions cannot be easily transferred and require costly re-training on different models. In the context of efficient OVS, we aim to achieve performance comparable to or even better than prior OVS works based on large vision-language foundation models, by utilizing smaller models that incur lower training costs. The core strategy is to make our efficiency principled and thus seamlessly transferable from one OVS framework to others without further customization. Comprehensive experiments on diverse OVS benchmarks demonstrate our superior trade-off between segmentation accuracy and computation costs over previous works.

Installation

You can configure the dataset and runtime environment based on the settings provided in fc-clip.
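As a starting point, the environment setup typically follows the fc-clip / Mask2Former convention (detectron2-based). The sketch below is illustrative only: the environment name is hypothetical, and the exact Python, PyTorch, and detectron2 versions should be taken from fc-clip's installation instructions.

```shell
# Illustrative fc-clip-style setup; check fc-clip's INSTALL.md for exact versions.
conda create -n opentrans python=3.8 -y
conda activate opentrans

# PyTorch with CUDA support (choose the build matching your CUDA version)
pip install torch torchvision

# detectron2, which fc-clip and Mask2Former build on
pip install 'git+https://github.com/facebookresearch/detectron2.git'

# Remaining repo dependencies
pip install -r requirements.txt

# Compile the deformable-attention CUDA ops used by the Mask2Former pixel decoder
cd mask2former/modeling/pixel_decoder/ops && sh make.sh
```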

Getting Started

See Getting Started

Result

Model efficiency via subnetwork transfer (ResNet-50).

| Model | COCO | Cityscapes | ADE20K-150 | ADE20K-847 | PAS-20 | PC-59 | PC-459 | Params | FLOPs |
|-------|------|------------|------------|------------|--------|-------|--------|--------|-------|
| Han et al. | 46.0 | 33.9 | 16.6 | 2.5 | 71.2 | 39.0 | 7.1 | 44.1M | 268.2G |
| Random (Han et al.) | 37.4 | 28.7 | 13.2 | 2.2 | 60.1 | 33.2 | 5.8 | 22.9M | 173.3G |
| Ours (Han et al.) | 42.5 | 31.7 | 15.8 | 2.6 | 64.6 | 35.1 | 6.4 | 22.9M | 173.3G |
| DeepLabV3 | 26.3 | 20.3 | 8.8 | - | 44.1 | 23.9 | 4.1 | 40.3M | 241.3G |
| Random (DeepLabV3) | 17.9 | 16.3 | 6.4 | - | 30.2 | 16.5 | 2.7 | 19.1M | 146.8G |
| Ours (DeepLabV3) | 34.8 | 24.3 | 10.8 | - | 55.2 | 28.9 | 5.2 | 19.1M | 146.8G |
| FC-CLIP | 58.7 | 53.2 | 23.3 | 7.1 | 89.5 | 50.5 | 12.9 | 39.0M | 200.1G |
| Random (FC-CLIP) | 52.8 | 50.0 | 17.2 | 3.2 | 85.5 | 44.8 | 8.7 | 17.8M | 105.6G |
| Ours (FC-CLIP) | 56.8 | 52.1 | 19.1 | 4.2 | 87.6 | 47.4 | 9.9 | 17.8M | 105.6G |

Efficient fine-tuning (ResNet-50).

| Model | COCO | Cityscapes | ADE20K-150 | ADE20K-847 | PAS-20 | PC-59 | PC-459 | Training FLOPs |
|-------|------|------------|------------|------------|--------|-------|--------|----------------|
| Han et al. | 46.0 | 33.9 | 16.6 | 2.5 | 71.2 | 39.0 | 7.1 | 181.4P |
| Random (Han et al.) | 44.5 | 33.5 | 16.4 | 2.5 | 70.1 | 38.5 | 7.2 | 164.5P |
| $\alpha^*$ (Han et al.) | 45.3 | 33.6 | 16.7 | 2.7 | 73.2 | 39.2 | 7.3 | 172.3P |
| $\alpha$ (Han et al.) | 47.2 | 34.0 | 17.3 | 2.9 | 74.0 | 39.9 | 7.7 | 159.6P |

Acknowledgement

- Mask2Former
- fc-clip
- Han et al.
