This repository has been archived by the owner. It is now read-only.
ZNN

ZNN is a multicore CPU implementation of deep learning for sliding window convolutional networks. This type of ConvNet is used for image-to-image transformations, i.e., it produces dense output. Use of ZNN is currently deprecated, except for the special case of 3D kernels that are 5x5x5 or larger. In this case, the FFT-based convolutions of ZNN are able to compensate for the lower FLOPS of most CPUs relative to GPUs.
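The trade-off is easy to see with a small illustrative sketch (plain NumPy/SciPy, not ZNN's API or code; array sizes are arbitrary): direct 3D convolution does work proportional to the kernel volume, while FFT-based convolution is dominated by transforming the image and is largely insensitive to kernel size, which is why large kernels favor the FFT path.

```python
# Hypothetical illustration (not ZNN code): direct vs. FFT-based 3D convolution.
# Direct convolution scales with the kernel volume; fftconvolve's cost is
# dominated by the FFTs of the image, so it changes little as the kernel grows.
import time
import numpy as np
from scipy.signal import convolve, fftconvolve

image = np.random.rand(64, 64, 64).astype(np.float32)   # arbitrary 3D patch
kernel = np.random.rand(5, 5, 5).astype(np.float32)     # 5x5x5 kernel

t0 = time.time()
direct = convolve(image, kernel, mode='valid', method='direct')
t1 = time.time()
viafft = fftconvolve(image, kernel, mode='valid')
t2 = time.time()

print(f"direct: {t1 - t0:.3f}s, fft: {t2 - t1:.3f}s")
print("max abs difference:", np.max(np.abs(direct - viafft)))
```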

For most dense-output ConvNet applications, we are currently using https://github.com/torms3/DataProvider with Caffe or TensorFlow.

ZNN will be superseded by ZNNphi, which is currently in preparation. By vectorizing direct convolutions, ZNNphi more efficiently utilizes the FLOPS of multicore CPUs for 2D and small 3D kernels. This includes manycore CPUs like Xeon Phi Knights Landing, which has narrowed the FLOPS gap with GPUs.

Resources

Publications

  • Zlateski, A., Lee, K. & Seung, H. S. (2015) ZNN - A Fast and Scalable Algorithm for Training 3D Convolutional Networks on Multi-Core and Many-Core Shared Memory Machines. (arXiv link)
  • Lee, K., Zlateski, A., Vishwanathan, A. & Seung, H. S. (2015) Recursive Training of 2D-3D Convolutional Networks for Neuronal Boundary Detection. (arXiv link)

Contact

C++ core

Python Interface
