Learning Superpixels with Segmentation-Aware Affinity Loss


Wei-Chih Tu, Ming-Yu Liu, Varun Jampani, Deqing Sun, Shao-Yi Chien, Ming-Hsuan Yang, and Jan Kautz. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.

Project | Paper

License

Copyright (C) 2018 NVIDIA Corporation. All rights reserved. Licensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).

Getting Started

In this repository, we provide the test code and the model trained on the BSDS500 dataset using the ERS algorithm as the superpixel segmenter. We also provide the evaluation scripts used in our experiments.

Prerequisites

  • Hardware: PC with an NVIDIA GPU. We have tested the code with a GeForce GTX 1080 Ti and a TITAN Xp.
  • Software: CUDA 9.1, PyTorch 0.4.1, OpenCV 3.4.2

Data Format

Superpixel labels are integers, so we save them as single-channel 16-bit PNG images. Such PNG files can be read with OpenCV's imread() by passing the -1 flag (cv2.IMREAD_UNCHANGED), which preserves the 16-bit depth:

import cv2

img = cv2.imread('input.png', -1)

In our experiments, we preprocess all datasets so that the segmentation ground-truth maps are stored in the same 16-bit PNG format. The /data folder contains a few sample images from the BSDS500 test set together with their corresponding ground-truth maps in this format for reference.
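For reference, the snippet below is a minimal sketch (not code from this repository) showing how an integer label map can be written to, and read back from, this 16-bit PNG format with OpenCV and NumPy; the file names and the random label map are placeholders.

import cv2
import numpy as np

# Placeholder label map; real label maps come from a superpixel segmenter.
labels = np.random.randint(0, 600, size=(321, 481))

# Cast to uint16 so OpenCV writes a single-channel 16-bit PNG.
cv2.imwrite('labels.png', labels.astype(np.uint16))

# Read it back with -1 (cv2.IMREAD_UNCHANGED) to preserve the 16-bit depth.
loaded = cv2.imread('labels.png', -1)
assert loaded.dtype == np.uint16 and (loaded == labels).all()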

Testing

Go to /test and run test.py. The file bsds500.pkl is the model trained on the BSDS500 dataset with the ERS algorithm, and cityscapes.pkl is the model trained on Cityscapes; we note that bsds500.pkl also generalizes well to Cityscapes. ERSModule.so is a Python interface to the ERS algorithm. We have slightly modified the original ERS algorithm so that it can take pixel affinities as input; see readme_ERS.pdf for more details.
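As a rough illustration only (not the actual contents of test.py), inference with a trained PyTorch model usually follows the pattern below; the way bsds500.pkl is loaded and the input preprocessing shown here are assumptions, and the real details are in test.py.

import cv2
import numpy as np
import torch

# Assumption: the provided .pkl can be loaded with torch.load into an nn.Module.
model = torch.load('bsds500.pkl', map_location='cuda')
model.eval()

# Placeholder preprocessing: BGR image to a normalized NCHW float tensor.
img = cv2.imread('input.jpg').astype(np.float32) / 255.0
x = torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0).cuda()

with torch.no_grad():        # disables autograd bookkeeping to cut memory at test time
    affinities = model(x)    # predicted pixel affinities, later fed to ERS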

Evaluation

We provide code for computing the ASA (Achievable Segmentation Accuracy) and BR (Boundary Recall) scores for superpixel evaluation. Go to /eval and run one of the two Python scripts. Make sure the input and output folder paths are specified correctly in the scripts. To use the eval_par.py script, you will additionally need to install the joblib package, which enables parallel evaluation; this is particularly helpful when evaluating a large dataset over many superpixel counts. The core evaluation functions are written in C++, and EvalSPModule.so is their Python interface. See readme_eval.pdf for more details.
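For context, eval_par.py builds on joblib's Parallel/delayed pattern sketched below; evaluate_one and the image IDs are hypothetical stand-ins, since the real per-image work is done by the C++ functions behind EvalSPModule.so.

from joblib import Parallel, delayed

def evaluate_one(name):
    # Hypothetical per-image evaluation: load the superpixel map and the
    # ground-truth map for `name`, then compute and return (ASA, BR).
    ...

names = ['100007', '100039']                    # placeholder image IDs
# n_jobs=-1 uses all available cores; images are evaluated independently.
scores = Parallel(n_jobs=-1)(delayed(evaluate_one)(n) for n in names)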

BibTeX

If you find this repository useful in your project, please cite us:

@inproceedings{Tu-CVPR-2018,
    author = {Tu, Wei-Chih and Liu, Ming-Yu and Jampani, Varun and Sun, Deqing and Chien, Shao-Yi and Yang, Ming-Hsuan and Kautz, Jan},
    title = {Learning Superpixels with Segmentation-Aware Affinity Loss},
    booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    month = {June},
    year = {2018},
}