Build status badges for the master (production) and develop (development) branches are shown at the top of the repository page.

Welcome to the CN24 GitHub repository!

CN24 is a complete semantic segmentation framework using fully convolutional networks. It supports a wide variety of platforms (Linux, Mac OS X and Windows) and libraries (OpenCL, Intel MKL, AMD ACML...) while providing dependency-free reference implementations. The software is developed in the Computer Vision Group at the University of Jena.

Why should I use CN24?

  1. Designed for pixel-wise labeling and semantic segmentation (train and test your own networks!)
  2. Suited for a wide range of applications: driver assistance systems, scene understanding, remote sensing, biomedical image processing, and many more
  3. OpenCL support, so it is not limited to NVIDIA GPUs
  4. High-performance implementation with minimal dependencies on other libraries

Getting started

To get started, clone this repository and visit the wiki! Installation is just two command lines away (a sketch follows below). For an even faster introduction, check out one of the two example applications linked in the wiki.

The repository contains pre-trained networks for these two applications, ready to use.
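
As a minimal sketch of those two command lines, assuming a standard CMake build and the cvjena/cn24 repository location (the wiki has the authoritative, platform-specific steps):

```sh
# Clone the repository (URL assumed; adjust if the project has moved).
git clone https://github.com/cvjena/cn24.git
cd cn24

# Configure and build with CMake; see the wiki for options such as
# enabling OpenCL, Intel MKL, or AMD ACML support.
cmake . && make
```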

Licensing

CN24 is available under a 3-clause BSD license. See LICENSE for details. If you use CN24 for research, please cite our paper: Clemens-Alexander Brust, Sven Sickert, Marcel Simon, Erik Rodner, Joachim Denzler. "Convolutional Patch Networks with Spatial Prior for Road Detection and Urban Scene Understanding." VISAPP 2015.
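
For convenience, here is a BibTeX entry assembled from the reference above (the citation key and the expanded venue name are our own choices; please verify against the publisher's record):

```bibtex
@inproceedings{brust2015convolutional,
  author    = {Clemens-Alexander Brust and Sven Sickert and Marcel Simon and Erik Rodner and Joachim Denzler},
  title     = {Convolutional Patch Networks with Spatial Prior for Road Detection and Urban Scene Understanding},
  booktitle = {International Conference on Computer Vision Theory and Applications (VISAPP)},
  year      = {2015}
}
```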

Remark: The paper does not discuss the fully convolutional network adaptations integrated in CN24.

Questions?

If you have questions, feedback, or run into problems, let us know by writing an e-mail to Clemens-Alexander Brust, Sven Sickert, Marcel Simon, or Erik Rodner.