YACCLAB: Yet Another Connected Components Labeling Benchmark

Please include the following references when citing the YACCLAB project/dataset:

  • Bolelli, Federico; Cancilla, Michele; Baraldi, Lorenzo; Grana, Costantino, "Towards Reliable Experiments on the Performance of Connected Components Labeling Algorithms," Journal of Real-Time Image Processing, 2018.

  • Grana, Costantino; Bolelli, Federico; Baraldi, Lorenzo; Vezzani, Roberto, "YACCLAB - Yet Another Connected Components Labeling Benchmark," Proceedings of the 23rd International Conference on Pattern Recognition, Cancun, Mexico, 4-8 Dec 2016.

YACCLAB is an open source C++ project that enables researchers to test CCL algorithms from many different points of view, running them on the collection of datasets described below. The benchmark performs the following tests, each described later in this README: correctness, average run-time (average), average run-time with steps (average_ws), density, size, granularity, and memory accesses (memory).

Notice that 8-connectivity is always used in the project.

Requirements

To correctly install and run YACCLAB, the following packages, libraries, and utilities are needed:

Notes for gnuplot:

  • on Windows: be sure to add gnuplot to the system path if you want YACCLAB to automatically generate charts.
  • on macOS: the 'pdf terminal' may not be available due to an old version of cairo; 'postscript' is used instead.

Installation (refer to the image below)

  • Clone the GitHub repository (HTTPS clone URL: https://github.com/prittt/YACCLAB.git) or simply download the full master branch as a zip file and extract it (e.g., into a YACCLAB folder).

  • Install the software in the YACCLAB/bin subfolder (suggested) or wherever you want, using CMake (point 2 of the example image). Note that CMake should automatically find the OpenCV path if it is correctly installed on your OS (3), download the YACCLAB dataset (be sure to check the box if you want to download it (4), or select the correct path if the dataset is already on your file system (5)), and create a C++ project for the selected IDE/compiler (7-8).

(CMake configuration screenshot)

  • Set the configuration file (config.yaml), placed in the installation folder (bin in this example), to select the desired tests.

  • Open the project, compile and run it: the work is done!

How to include a YACCLAB algorithm into your own project?

If your project requires a Connected Components Labeling algorithm and you are not interested in the whole YACCLAB benchmark, you can use the connectedComponents function of the OpenCV library, which has implemented the BBDT and SAUF algorithms since version 3.2.

However, when the connectedComponents function is called, a lot of additional code is executed together with the core function. If your project requires the best performance, you can include an algorithm implemented in YACCLAB by adding the following files to your project:

  1. labeling_algorithms.h and labeling_algorithms.cc, which define the base class from which every algorithm derives.
  2. labels_solver.h and labels_solver.cc, which contain the implementations of the labels solving algorithms.
  3. memory_tester.h and performance_evaluator.h, just to make things work without changing the code.
  4. header and source files of the required algorithm(s). The association between algorithms and header/source files is reported in the table below.
| Algorithm Name | Authors | Year | Acronym | Required Files | Templated on Labels Solver |
|----------------|---------|------|---------|----------------|----------------------------|
| - | L. Di Stefano, A. Bulgarelli [3] | 1999 | DiStefano | labeling_distefano_1999.h | NO |
| Contour Tracing | F. Chang, C.J. Chen, C.J. Lu [1] | 2003 | CT | labeling_fchang_2003.h | NO |
| Configuration Transition Based | L. He, X. Zhao, Y. Chao, K. Suzuki [7] | 2014 | CTB | labeling_he_2014.h, labeling_he_2014_graph.inc | YES |
| Scan Array-based with Union Find | K. Wu, E. Otoo, K. Suzuki [6] | 2009 | SAUF | labeling_wu_2009.h, labeling_wu_2009_tree.inc | YES |
| Stripe-Based Labeling Algorithm | H.L. Zhao, Y.B. Fan, T.X. Zhang, H.S. Sang [8] | 2010 | SBLA | labeling_zhao_2010.h | NO |
| Block-Based with Decision Tree | C. Grana, D. Borghesani, R. Cucchiara [4] | 2010 | BBDT | labeling_grana_2010.h, labeling_grana_2010_tree.inc | YES |
| Block-Based with Binary Decision Trees | W.Y. Chang, C.C. Chiu, J.H. Yang [2] | 2015 | CCIT | labeling_wychang_2015.h, labeling_wychang_2015_tree.inc, labeling_wychang_2015_tree_0.inc | YES |
| Light Speed Labeling | L. Cabaret, L. Lacassagne, D. Etiemble [5] | 2016 | LSL_STD (I), LSL_STDZ (II), LSL_RLE (III) | labeling_lacassagne_2016.h, labeling_lacassagne_2016_code.inc | YES (IV) |
| Pixel Prediction | C. Grana, L. Baraldi, F. Bolelli [9] | 2016 | PRED | labeling_grana_2016.h, labeling_grana_2016_forest.inc, labeling_grana_2016_forest_0.inc | YES |
| Directed Rooted Acyclic Graph | F. Bolelli, L. Baraldi, C. Grana [19] | 2018 | DRAG | labeling_bolelli_2018.h, labeling_grana_2018_drag.inc | YES |
| Null Labeling | F. Bolelli, M. Cancilla, L. Baraldi, C. Grana [18] | 2018 | NULL (V) | labeling_null.h | NO |

(I) standard version
(II) with zero-offset optimization
(III) with RLE compression
(IV) only on TTA and UF
(V) it only copies the pixels from the input image to the output one, thus defining a lower bound on the execution time of CCL algorithms on a given machine and dataset.

Example of Algorithm Usage Outside the Benchmark

#include "labels_solver.h"
#include "labeling_algorithms.h"
#include "labeling_grana_2010.h" // To include the algorithm code (BBDT in this example)

#include <opencv2/opencv.hpp>

using namespace cv;

int main()
{
    BBDT<UFPC> BBDT_UFPC; // To create an object of the desired algorithm (BBDT in this example)
                          // templated on the labels solving strategy. See the README for the
                          // complete list of the available labels solvers, available algorithms 
                          // (N.B. not all the algorithms are templated on the solver) and their 
                          // acronyms.
    
    BBDT_UFPC.img_ = imread("test_image.png", IMREAD_GRAYSCALE); // To load into the CCL object 
                                                                 // the BINARY image to be labeled 

    threshold(BBDT_UFPC.img_, BBDT_UFPC.img_, 100, 1, THRESH_BINARY); // Just to be sure that the
                                                                      // loaded image is binary

    BBDT_UFPC.PerformLabeling(); // To perform Connected Components Labeling!

    Mat1i output = BBDT_UFPC.img_labels_; // To get the output labeled image  
    unsigned n_labels = BBDT_UFPC.n_labels_; // To get the number of labels found in the input img

    return EXIT_SUCCESS;
}

Configuration File

A YAML configuration file placed in the installation folder lets you specify which tests should be performed, on which datasets, and on which algorithms. A complete description of all configuration parameters is reported below.

  • perform - dictionary which specifies the kind of tests to perform:
perform: 
  correctness:        false
  average:            true
  average_with_steps: false
  density:            false
  granularity:        false
  memory:             false
  • correctness_tests - dictionary indicating the kind of correctness tests to perform:
correctness_tests: 
  eight_connectivity_standard: true
  eight_connectivity_steps:    true
  eight_connectivity_memory:   true
  • tests_number - dictionary which sets the number of runs for each test available:
tests_number: 
  average:            10 
  average_with_steps: 10
  density:            10
  granularity:        10 
  • algorithms - list of algorithms on which the chosen tests are applied:
algorithms: 
  - SAUF_RemSP
  - SAUF_TTA
  - BBDT_RemSP
  - BBDT_UFPC
  - CT
  - labeling_NULL
  • check_datasets, average_datasets, average_ws_datasets and memory_datasets - lists of datasets on which, respectively, correctness, average, average_ws and memory tests should be run:
...
average_datasets: ["3dpes", "fingerprints", "hamlet", "medical", "mirflickr", "tobacco800", "xdocs"]
...
  • paths - dictionary with both input (datasets) and output (results) paths. It is automatically filled by CMake during the creation of the project:
paths: {input: "<datasets_path>", output: "<output_results_path>"}
  • write_n_labels - whether to report the number of connected components in the output files:
write_n_labels: false
  • color_labels - whether to output a colored version of labeled images during tests:
color_labels: {average: false, density: false}
  • save_middle_tests - dictionary specifying, separately for every test, whether to save the output of single runs, or only a summary of the whole test:
save_middle_tests: {average: false, average_with_steps: false, density: false, granularity: false}

How to Extend YACCLAB with New Algorithms

Work in progress.

The YACCLAB Dataset

The YACCLAB dataset includes both synthetic and real images. It is suitable for a wide range of applications, ranging from document processing to surveillance, and features significant variability in terms of resolution, image density, variance of density, and number of components. All images are provided in 1 bit per pixel PNG format, with 0 (black) being background and 1 (white) being foreground. The dataset is automatically downloaded by CMake during the installation process, as described in the installation paragraph, or it can be found at http://imagelab.ing.unimore.it/yacclab. Images are organized in folders as follows:

  • Synthetic Images:

    • Classical [4]:

      A set of synthetic images containing black and white random noise with 9 different foreground densities (from 10% up to 90%), from a low resolution of 32x32 pixels to a maximum resolution of 4096x4096 pixels, allowing the scalability and effectiveness of different approaches to be tested when the number of labels gets high. For every combination of size and density, 10 images are provided, for a total of 720 images. The resulting subset allows performance to be evaluated in terms of scalability on both the number of pixels and the number of labels (density).

    • Granularity [4]:

      This dataset allows algorithms to be tested varying not only the pixel density but also the granularity g (i.e., the dimension of the minimum foreground block), highlighting the behaviour of different proposals when the number of provisional labels changes. All the images have a resolution of 2048x2048 and are generated with the Mersenne Twister MT19937 random number generator implemented in the C++ standard library, starting with a seed equal to zero. The density of the images ranges from 0% to 100% with steps of 1%, and for every density value 16 images with pixel blocks of gxg, g ∈ [1,16], are generated. Moreover, the procedure has been repeated 10 times for every density-granularity pair, for a total of 16160 images.

  • MIRflickr [10]:

    The Otsu-binarized version of the MIRflickr dataset, publicly available under a Creative Commons License. It contains 25,000 standard resolution images taken from Flickr. These images have an average resolution of 0.17 megapixels and few connected components (495 on average), and are generally composed of not too complex patterns, so the labeling is quite easy and fast.

  • Hamlet:

    A set of 104 images scanned from a version of Hamlet found on Project Gutenberg (http://www.gutenberg.org). Images have an average of 2.71 million pixels to analyze and 1447 components to label, with an average foreground density of 0.0789.

  • Tobacco800 [11,12,13]:

    A set of 1290 document images. It is a realistic database for document image analysis research, as these documents were collected and scanned using a wide variety of equipment over time. Resolutions of documents in Tobacco800 vary significantly, from 150 to 300 DPI, and the dimensions of images range from 1200 by 1600 to 2500 by 3200 pixels. Since CCL is one of the initial preprocessing steps in most layout analysis or OCR algorithms, hamlet and tobacco800 allow algorithm performance to be tested in such scenarios.

  • 3DPeS [14]:

    It comes from 3DPeS (3D People Surveillance Dataset), a surveillance dataset designed mainly for people re-identification in multi camera systems with non-overlapped fields of view. 3DPeS can also be exploited to test many other tasks, such as people detection, tracking, action analysis and trajectory analysis. The background models for all cameras are provided, so a very basic motion segmentation technique has been applied to generate the foreground binary masks, i.e., background subtraction and fixed thresholding. The analysis of foreground masks to remove small connected components and for nearest neighbor matching is a common application for CCL.

  • Medical [15]:

    This dataset is composed of histological images and allows us to cover this fundamental medical field. The process used for nuclei segmentation and binarization is described in [15]. The resulting dataset is a collection of 343 binary histological images with an average of 1.21 million pixels to analyze and 484 components to label.

  • Fingerprints [16]:

    This dataset contains 960 fingerprint images, collected using low-cost optical sensors or synthetically generated. These images were taken from the three Fingerprint Verification Competitions FVC2000, FVC2002 and FVC2004. To fit the CCL application, fingerprints have been binarized using an adaptive threshold and then negated in order to have foreground pixels with value 255. Most of the original images have a resolution of 500 DPI and their dimensions range from 240 by 320 up to 640 by 480 pixels.
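The generation procedure for the granularity images described above can be sketched with the C++ standard library alone. The following is an illustrative reconstruction based on that description, not the official generator; the block layout, traversal order, and exact use of the generator are assumptions:

```cpp
// Illustrative sketch (NOT the official YACCLAB generator): producing a
// granularity-style binary image with the C++ standard MT19937 generator.
#include <cstdint>
#include <random>
#include <vector>

// Fills a side x side image with g x g blocks; each block is entirely
// foreground (1) with probability `density`, otherwise background (0).
std::vector<std::uint8_t> MakeGranularityImage(int side, int g, double density,
                                               std::uint32_t seed = 0) {
    std::mt19937 gen(seed); // Mersenne Twister MT19937, seeded with zero
    std::bernoulli_distribution coin(density);
    std::vector<std::uint8_t> img(static_cast<std::size_t>(side) * side, 0);
    for (int by = 0; by < side; by += g) {
        for (int bx = 0; bx < side; bx += g) {
            if (coin(gen)) { // whole block becomes foreground
                for (int y = by; y < by + g && y < side; ++y)
                    for (int x = bx; x < bx + g && x < side; ++x)
                        img[static_cast<std::size_t>(y) * side + x] = 1;
            }
        }
    }
    return img;
}
```

With side = 2048 and g ∈ [1,16] this mirrors the dataset parameters; for a density of 0.5, roughly half of the pixels end up as foreground.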

Available Tests

  • Average run-time tests:

    execute an algorithm on every image of a dataset. The process can be repeated multiple times in a single test to get the minimum execution time for each image: this produces more reproducible results and disregards delays caused by other running processes. It is also possible to compare the execution speed of different algorithms on the same dataset: in this case, the selected algorithms (see Configuration File for more details) are executed sequentially on every image of the dataset. Results are presented in three different formats: a plain text file, histogram charts (.pdf/.ps) in either color or gray-scale, and a LaTeX table, which can be directly included in research papers.

  • Density and size tests:

    check the performance of different CCL algorithms when they are executed on images with varying foreground density and size. To this aim, a list of algorithms selected by the user is run sequentially on every image of the test_random dataset. As for run-time tests, it is possible to repeat this test for more than one run. The output is presented as both plain text and charts (.pdf/.ps). For a density test, the mean execution time of each algorithm is reported for densities ranging from 10% up to 90%, while for a size test the same is reported for resolutions ranging from 32x32 up to 4096x4096.

  • Memory tests:

    are useful to understand the reasons for the good performance of an algorithm, or in general to explain its behavior. Memory tests compute the average number of accesses to the label image (i.e., the image used to store the provisional and then the final labels of the connected components), the average number of accesses to the binary image to be labeled, and, finally, the average number of accesses to the data structures used to solve the equivalences between label classes. Moreover, if an algorithm requires extra data, memory tests summarize them as "other" accesses and return the average. Furthermore, all average contributions of an algorithm and dataset are summed together in order to show the total amount of memory accesses. Since counting the number of memory accesses imposes additional computations, the functions implementing memory access tests are different from those implementing run-time and density tests, to keep run-time tests as objective as possible.

Examples of YACCLAB Output Results

Work in progress.

References

[1] F. Chang, C.-J. Chen, and C.-J. Lu, “A linear-time component-labeling algorithm using contour tracing technique,” Computer Vision and Image Understanding, vol. 93, no. 2, pp. 206–220, 2004.

[2] W.-Y. Chang, C.-C. Chiu, and J.-H. Yang, “Block-based connected-component labeling algorithm using binary decision trees,” Sensors, vol. 15, no. 9, pp. 23763–23787, 2015.

[3] L. Di Stefano and A. Bulgarelli, “A Simple and Efficient Connected Components Labeling Algorithm,” in International Conference on Image Analysis and Processing. IEEE, 1999, pp. 322–327.

[4] C. Grana, D. Borghesani, and R. Cucchiara, “Optimized Block-based Connected Components Labeling with Decision Trees,” IEEE Transactions on Image Processing, vol. 19, no. 6, pp. 1596–1609, 2010.

[5] L. Lacassagne and B. Zavidovique, “Light speed labeling: efficient connected component labeling on risc architectures,” Journal of Real-Time Image Processing, vol. 6, no. 2, pp. 117–135, 2011.

[6] K. Wu, E. Otoo, and K. Suzuki, “Optimizing two-pass connected-component labeling algorithms,” Pattern Analysis and Applications, vol. 12, no. 2, pp. 117–135, 2009.

[7] L. He, X. Zhao, Y. Chao, and K. Suzuki, “Configuration-Transition-Based Connected-Component Labeling,” IEEE Transactions on Image Processing, vol. 23, no. 2, pp. 943–951, 2014.

[8] H. Zhao, Y. Fan, T. Zhang, and H. Sang, “Stripe-based connected components labelling,” Electronics Letters, vol. 46, no. 21, pp. 1434–1436, 2010.

[9] C. Grana, L. Baraldi, and F. Bolelli, “Optimized Connected Components Labeling with Pixel Prediction,” in Advanced Concepts for Intelligent Vision Systems, 2016.

[10] M. J. Huiskes and M. S. Lew, “The MIR Flickr Retrieval Evaluation,” in MIR ’08: Proceedings of the 2008 ACM International Conference on Multimedia Information Retrieval. New York, NY, USA: ACM, 2008. [Online]. Available: http://press.liacs.nl/mirflickr/

[11] G. Agam, S. Argamon, O. Frieder, D. Grossman, and D. Lewis, “The Complex Document Image Processing (CDIP) Test Collection Project,” Illinois Institute of Technology, 2006. [Online]. Available: http://ir.iit.edu/projects/CDIP.html

[12] D. Lewis, G. Agam, S. Argamon, O. Frieder, D. Grossman, and J. Heard, “Building a test collection for complex document information processing,” in Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval. ACM, 2006, pp. 665–666.

[13] “The Legacy Tobacco Document Library (LTDL),” University of California, San Francisco, 2007. [Online]. Available: http://legacy.library.ucsf.edu/

[14] D. Baltieri, R. Vezzani, and R. Cucchiara, “3DPeS: 3D People Dataset for Surveillance and Forensics,” in Proceedings of the 2011 joint ACM workshop on Human gesture and behavior understanding. ACM, 2011, pp. 59–64.

[15] F. Dong, H. Irshad, E.-Y. Oh, M. F. Lerwill, E. F. Brachtel, N. C. Jones, N. W. Knoblauch, L. Montaser-Kouhsari, N. B. Johnson, L. K. Rao et al., “Computational Pathology to Discriminate Benign from Malignant Intraductal Proliferations of the Breast,” PloS one, vol. 9, no. 12, p. e114885, 2014.

[16] D. Maltoni, D. Maio, A. Jain, and S. Prabhakar, Handbook of fingerprint recognition. Springer Science & Business Media, 2009.

[17] C. Grana, F. Bolelli, L. Baraldi, and R. Vezzani, “YACCLAB - Yet Another Connected Components Labeling Benchmark,” in Proceedings of the 23rd International Conference on Pattern Recognition, Cancun, Mexico, 2016.

[18] F. Bolelli, M. Cancilla, L. Baraldi, and C. Grana, “Towards Reliable Experiments on the Performance of Connected Components Labeling Algorithms,” Journal of Real-Time Image Processing, 2018.

[19] F. Bolelli, L. Baraldi, M. Cancilla, and C. Grana, “Connected Components Labeling on DRAGs,” in Proceedings of the 24th International Conference on Pattern Recognition, Beijing, China, 2018.