Superpixels: An Evaluation of the State-of-the-Art
This repository contains the source code used for the evaluation in the paper cited below, a large-scale comparison of state-of-the-art superpixel algorithms.
Please cite the following work if you use this benchmark or the provided tools or implementations:
 D. Stutz, A. Hermans, B. Leibe. Superpixels: An Evaluation of the State-of-the-Art. Computing Research Repository, abs/1612.01601.
Also make sure to cite the relevant additional papers when using the datasets or superpixel algorithms.
- An implementation of the average metrics, i.e. Average Boundary Recall (called Average Miss Rate in the updated paper), Average Undersegmentation Error and Average Explained Variation (called Average Unexplained Variation in the updated paper), is provided in `lib_eval/evaluation.h`. An easy-to-use command line tool, `eval_average_cli`, is also provided; see the corresponding documentation and examples in Executables and Examples, respectively.
- As of Mar 29, 2017, the paper has been accepted for publication in CVIU.
- The converted (i.e. pre-processed) NYUV2, SBD and SUNRGBD datasets are now available in the data repository.
- The source code of MSS has been added.
- The source code of PF and SEAW has been added.
- Doxygen documentation is now available here.
- The presented paper was in preparation for a longer period of time; as a result, some recent superpixel algorithms, including SCSP and LRW, are not included in the comparison.
Superpixels group pixels similar in color and other low-level properties. In this respect, superpixels address two problems inherent to the processing of digital images: firstly, pixels are merely a result of discretization; and secondly, the high number of pixels in large images prevents many algorithms from being computationally feasible. Superpixels were introduced as more natural entities - grouping pixels which perceptually belong together while heavily reducing the number of primitives.
This repository can be understood as supplementary material for an extensive evaluation of 28 algorithms on 5 datasets regarding visual quality, performance, runtime, implementation details and robustness - as presented in the paper. To ensure a fair comparison, parameters have been optimized on separate training sets; as the number of generated superpixels heavily influences parameter optimization, we additionally enforced connectivity. Furthermore, to evaluate superpixel algorithms independently of the number of superpixels, we propose to integrate over commonly used metrics such as Boundary Recall, Undersegmentation Error and Explained Variation. Finally, we present a ranking of the superpixel algorithms considering multiple metrics and independent of the number of generated superpixels, as shown below.
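For intuition, the two core metrics can be sketched in a few lines of NumPy. This is an illustrative re-implementation, not the optimized C++ code in `lib_eval`; the Undersegmentation Error below follows the min-of-inside-and-outside formulation, and the tolerance `r` is a simplified square neighborhood.

```python
import numpy as np

def boundary_map(labels):
    """Mark pixels whose label differs from the right or bottom neighbor."""
    b = np.zeros(labels.shape, dtype=bool)
    b[:, :-1] |= labels[:, :-1] != labels[:, 1:]
    b[:-1, :] |= labels[:-1, :] != labels[1:, :]
    return b

def dilate(b, r):
    """Binary dilation with a (2r+1)x(2r+1) square structuring element."""
    h, w = b.shape
    p = np.pad(b, r)  # pad with False
    out = np.zeros_like(b)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out |= p[dy:dy + h, dx:dx + w]
    return out

def boundary_recall(sp_labels, gt_labels, r=1):
    """Fraction of ground-truth boundary pixels with a superpixel
    boundary within distance r."""
    gt_b = boundary_map(gt_labels)
    sp_b = dilate(boundary_map(sp_labels), r)
    return np.logical_and(gt_b, sp_b).sum() / max(gt_b.sum(), 1)

def undersegmentation_error(sp_labels, gt_labels):
    """Each superpixel overlapping a ground-truth segment contributes
    its smaller part (inside the segment, or leaking outside it)."""
    error = 0
    for g in np.unique(gt_labels):
        g_mask = gt_labels == g
        for s in np.unique(sp_labels[g_mask]):
            s_mask = sp_labels == s
            inside = np.logical_and(s_mask, g_mask).sum()
            error += min(inside, s_mask.sum() - inside)
    return error / gt_labels.size
```

A segmentation identical to the ground truth yields a Boundary Recall of 1 and an Undersegmentation Error of 0; a single superpixel covering the whole image yields a recall of 0 whenever ground-truth boundaries exist.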
The table shows the average ranks across the 5 datasets, taking into account Average Boundary Recall (ARec) and Average Undersegmentation Error (AUE) - lower is better in both cases, see Benchmark. The confusion matrix shows the rank distribution of the algorithms across the datasets.
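The averaging behind these scores can be sketched numerically: measure a metric at several superpixel counts, integrate over the counts with the trapezoidal rule, and normalize by the range of counts, so the score no longer depends on any single choice of superpixel count. The measurements below are invented purely for illustration.

```python
import numpy as np

# Hypothetical Boundary Recall measurements at several superpixel counts K
# (numbers made up for illustration).
ks = np.array([200, 400, 600, 800, 1000], dtype=float)
recall = np.array([0.80, 0.88, 0.92, 0.94, 0.95])

# Trapezoidal integration over K, normalized by the K range, gives a
# single average score independent of any particular K.
avg_rec = np.sum((recall[1:] + recall[:-1]) / 2 * np.diff(ks)) / (ks[-1] - ks[0])
```

The same recipe applies to Undersegmentation Error and Explained Variation, yielding the AUE and Average (Un)explained Variation scores used in the ranking.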
The following algorithms were evaluated in the paper, and most of them are included in this repository:
|Algorithm|Notes|Reference & Website|
|---|---|---|
|CCS||Ref. & Web|
|CIS|Instructions|Ref. & Web|
|CRS||Ref. & Web|
|CW||Ref. & Web|
|DASP||Ref. & Web|
|EAMS||Ref., Ref., Ref. & Web|
|ERS||Ref. & Web|
|FH||Ref. & Web|
|PB||Ref. & Web|
|preSLIC||Ref. & Web|
|SEAW||Ref. & Web|
|SEEDS||Ref. & Web|
|SLIC||Ref. & Web|
|TP||Ref. & Web|
|TPS||Ref. & Web|
|WP||Ref. & Web|
|PF||Ref. & Web|
|LSC||Ref. & Web|
|RW||Ref. & Web|
|QS||Ref. & Web|
|NC||Ref. & Web|
|VCCS||Ref. & Web|
|POISE||Ref. & Web|
|VC||Ref. & Web|
|ETPS||Ref. & Web|
|ERGC||Ref., Ref. & Web|
To keep the benchmark alive, we encourage authors to make their implementations publicly available and integrate them into this benchmark. We are happy to help with the integration and to update the results published in the paper and on the project page. Also see the Documentation for details.
Further, note that the additional dataset downloads described in Datasets are subject to the licenses of the original datasets.
The remaining source code provided in this repository is licensed as follows:
Copyright (c) 2016, David Stutz
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
- Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
- Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
- Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.