Tree Energy Loss: Towards Sparsely Annotated Semantic Segmentation

Introduction

This repository is an official implementation of the CVPR 2022 paper Tree Energy Loss: Towards Sparsely Annotated Semantic Segmentation.

[Figure: TEL framework overview]

Abstract. Sparsely annotated semantic segmentation (SASS) aims to train a segmentation network with coarse-grained (i.e., point-, scribble-, and block-wise) supervision, where only a small proportion of pixels are labeled in each image. In this paper, we propose a novel tree energy loss for SASS that provides semantic guidance for unlabeled pixels. The tree energy loss represents images as minimum spanning trees to model both low-level and high-level pair-wise affinities. By sequentially applying these affinities to the network prediction, soft pseudo labels for unlabeled pixels are generated in a coarse-to-fine manner, achieving dynamic online self-training. The tree energy loss is effective and easy to incorporate into existing frameworks by combining it with a traditional segmentation loss.
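
In training terms, the overall objective is a standard (partial) cross-entropy loss on the labeled pixels plus the tree energy term on the unlabeled ones. The PyTorch sketch below is illustrative only, not the repository's implementation: the soft pseudo labels are assumed to come from the MST-based tree filtering described above (e.g., via TreeFilter-Torch), and `IGNORE` and the weight `lam` are assumptions.

```python
import torch
import torch.nn.functional as F

IGNORE = 255  # assumed value marking unlabeled pixels in the sparse annotation

def sass_loss(logits, sparse_target, pseudo_soft, lam=0.4):
    """Combine partial cross-entropy on labeled pixels with an L1
    consistency term on unlabeled pixels.

    logits:        (N, C, H, W) raw network outputs
    sparse_target: (N, H, W) int64 labels, IGNORE on unlabeled pixels
    pseudo_soft:   (N, C, H, W) soft pseudo labels from tree filtering
    lam:           weighting factor (0.4 is an illustrative assumption)
    """
    # supervised term: standard CE restricted to the labeled pixels
    ce = F.cross_entropy(logits, sparse_target, ignore_index=IGNORE)
    # unsupervised term: distance between the prediction and the soft
    # pseudo label, evaluated only on unlabeled pixels
    prob = logits.softmax(dim=1)
    unlabeled = (sparse_target == IGNORE).unsqueeze(1).float()
    tree = (unlabeled * (prob - pseudo_soft).abs()).sum() / unlabeled.sum().clamp(min=1.0)
    return ce + lam * tree

# toy usage with random tensors (point-style annotation: one labeled pixel)
N, C, H, W = 2, 21, 8, 8
logits = torch.randn(N, C, H, W, requires_grad=True)
target = torch.full((N, H, W), IGNORE, dtype=torch.long)
target[:, 0, 0] = 3  # a single clicked pixel per image
pseudo = torch.randn(N, C, H, W).softmax(dim=1)
sass_loss(logits, target, pseudo).backward()
```

With point annotations, for instance, `sparse_target` contains `IGNORE` almost everywhere, so nearly all of the supervision on those images comes from the tree energy term.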

News

(03/03/2022) Tree Energy Loss has been accepted by CVPR 2022.

(15/03/2022) Code and models released.

Main Results

| Method | Backbone | Dataset | Annotation | mIoU (%) | Model |
| --- | --- | --- | --- | --- | --- |
| HRNet | HRNet_w48 | Cityscapes | block50 | 72.2 | google |
| HRNet | HRNet_w48 | Cityscapes | block20 | 66.8 | google |
| HRNet | HRNet_w48 | Cityscapes | block10 | 61.8 | google |
| HRNet | HRNet_w48 | ADE20k | block50 | 40.3 | google |
| HRNet | HRNet_w48 | ADE20k | block20 | 36.5 | google |
| HRNet | HRNet_w48 | ADE20k | block10 | 34.7 | google |
| DeeplabV3+ | ResNet101 | VOC2012 | point | 65.4 | google |
| LTF | ResNet101 | VOC2012 | point | 68.0 | google |
| DeeplabV3+ | ResNet101 | VOC2012 | scribble | 77.6 | google |
| LTF | ResNet101 | VOC2012 | scribble | 77.4 | google |

Requirements

  • Linux, Python >= 3.6, CUDA >= 10.0, PyTorch == 1.7.1 (a sample environment setup is sketched below)
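
For reference, a minimal environment could be set up as follows (a sketch assuming conda; the environment name tel is arbitrary, and the torch/torchvision builds should match your CUDA version):

conda create -n tel python=3.6

conda activate tel

pip install torch==1.7.1 torchvision==0.8.2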

Installation

This implementation is built upon openseg.pytorch and TreeFilter-Torch. Many thanks to the authors for their efforts.

Sparse Annotation Preparation

After preparing the sparse annotations, the dataset directory should look like:

$DATA_ROOT
├── cityscapes
│   ├── train
│   │   ├── image
│   │   ├── label
│   │   └── sparse_label
│   │       ├── block10
│   │       ├── block20
│   │       └── block50
│   ├── val
│   │   ├── image
│   │   └── label
├── ade20k
│   ├── train
│   │   ├── image
│   │   ├── label
│   │   └── sparse_label
│   │       ├── block10
│   │       ├── block20
│   │       └── block50
│   ├── val
│   │   ├── image
│   │   └── label
├── voc2012
│   ├── voc_scribbles.zip
│   ├── voc_whats_the_point.json
│   └── voc_whats_the_point_bg_from_scribbles.json
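
As a quick sanity check (a hypothetical helper, not part of the repository), the following snippet verifies that the expected directories from the tree above exist under $DATA_ROOT:

```python
import os

# DATA_ROOT defaults to ./data here; point it at your actual dataset root
DATA_ROOT = os.environ.get("DATA_ROOT", "./data")
expected = [
    "cityscapes/train/sparse_label/block10",
    "cityscapes/train/sparse_label/block20",
    "cityscapes/train/sparse_label/block50",
    "cityscapes/val/label",
    "ade20k/train/sparse_label/block10",
    "ade20k/val/label",
    "voc2012",
]
for rel in expected:
    path = os.path.join(DATA_ROOT, rel)
    print(("ok      " if os.path.isdir(path) else "MISSING ") + path)
```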

Block-Supervised Setting

(1) To evaluate the released models:

bash scripts/cityscapes/hrnet/demo.sh val block50

(2) To train and evaluate your own models:

bash scripts/cityscapes/hrnet/train.sh train model_name

bash scripts/cityscapes/hrnet/train.sh val model_name

Point-Supervised and Scribble-Supervised Settings

(1) To evaluate the released models:

bash scripts/voc2012/deeplab/demo.sh val scribble

(2) To train and evaluate your own models:

bash scripts/voc2012/deeplab/train.sh train model_name

bash scripts/voc2012/deeplab/train.sh val model_name