
SeMask-MaskFormer

This repo contains the code for our paper SeMask: Semantically Masked Transformers for Semantic Segmentation. It is based on MaskFormer.

Contents

  1. Results
  2. Setup Instructions
  3. Citing SeMask

1. Results

  • † denotes that the backbone was pretrained on ImageNet-22k with 384x384 resolution images.
  • Pre-trained models can be downloaded by following the instructions given under tools.

ADE20K

| Method | Backbone | Crop Size | mIoU | mIoU (ms+flip) | #params | config | Checkpoint |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SeMask-L MaskFormer | SeMask Swin-L | 640x640 | 54.75 | 56.15 | 219M | config | checkpoint |

2. Setup Instructions

Installation

See installation instructions.

Getting Started

See Preparing Datasets for MaskFormer.

See Getting Started with MaskFormer.

3. Citing SeMask

@article{jain2021semask,
  title={SeMask: Semantically Masking Transformer Backbones for Effective Semantic Segmentation},
  author={Jitesh Jain and Anukriti Singh and Nikita Orlov and Zilong Huang and Jiachen Li and Steven Walton and Humphrey Shi},
  journal={arXiv},
  year={2021}
}