
Semantic segmentation for defect detection

This project is an implementation of the U-Net architecture for defect detection using semantic segmentation.

Table of contents

  • Installation
  • Quick start: training and evaluation
  • Dataset structure
  • MVTec dataset references

Installation

Use pipenv to create and activate the virtual environment:

pipenv shell
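
Assuming a Pipfile is present in the repository root (as pipenv expects), the project dependencies themselves are installed with:

pipenv install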

Quick start: training and evaluation

Training and inference for defect detection in hazelnuts are described in the correspondingly titled notebooks.

Alternatively, the whole pipeline can be run from the command line:

python main.py

We also provide a sample model for inference located in models_versioning/whiny_red_mastiff.
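
As an illustration only (not the repository's actual API), the sketch below shows how the sample model might be used for inference, assuming it is stored in a Keras-loadable format and expects RGB input normalized to [0, 1]; the input size and threshold are placeholders, not values taken from this project.

# Illustrative inference sketch; the model format, input size and threshold
# are assumptions of this example, not part of the repository.
import numpy as np
from PIL import Image
from tensorflow.keras.models import load_model

model = load_model("models_versioning/whiny_red_mastiff")  # assumed Keras-loadable

# Load an image and bring it to the shape the network is assumed to expect.
image = Image.open("data/images/img.png").convert("RGB")
image = image.resize((256, 256))                       # placeholder input size
batch = np.asarray(image, dtype=np.float32)[None] / 255.0

# Predict per-pixel class probabilities and threshold them into binary masks.
probabilities = model.predict(batch)[0]                # (H, W, n_classes)
masks = probabilities > 0.5                            # placeholder threshold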

Dataset structure

For our purposes, we worked with a private dataset. Similar results can be obtained on the hazelnut subset of the MVTec anomaly detection dataset.

We assume the images and masks are arranged in the following structure:

data
├── images
│   └── img.png
└── masks
    ├── class1_object
    │   └── img.png
    └── class2_object
        └── img.png       

Here data is the path specified by the data variable in configs/env.yaml. Each image should have an independent mask for each class, and the corresponding masks should be placed in folders named after their classes.
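
To make the convention concrete, the sketch below assembles the per-class masks of one image into a single multi-channel array. The paths and class folder names mirror the tree above; treating nonzero pixels as positive and the channel ordering are assumptions of this example, not requirements of the pipeline.

# Sketch: stack per-class binary masks of one image into a (H, W, n_classes) array.
# Paths follow the directory tree above; the thresholding and channel order are
# assumptions of this example.
import numpy as np
from pathlib import Path
from PIL import Image

data = Path("data")                      # value of the data variable in configs/env.yaml
classes = ["class1_object", "class2_object"]

image = np.asarray(Image.open(data / "images" / "img.png"))
mask_stack = np.stack(
    [np.asarray(Image.open(data / "masks" / c / "img.png")) > 0 for c in classes],
    axis=-1,
)                                        # shape (H, W, n_classes), one channel per class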

MVTec dataset references:

  • Paul Bergmann, Kilian Batzner, Michael Fauser, David Sattlegger, Carsten Steger: The MVTec Anomaly Detection Dataset: A Comprehensive Real-World Dataset for Unsupervised Anomaly Detection; in: International Journal of Computer Vision 129(4):1038-1059, 2021, DOI: 10.1007/s11263-020-01400-4.

  • Paul Bergmann, Michael Fauser, David Sattlegger, Carsten Steger: MVTec AD — A Comprehensive Real-World Dataset for Unsupervised Anomaly Detection; in: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 9584-9592, 2019, DOI: 10.1109/CVPR.2019.00982.

About

Semantic segmentation demonstration for the SmartAgriHubs RAINaDiv EXPAND project
