cxxnet

We are going to update to V2 soon. The new version supports multi-GPU/distributed training and cuDNN. Stay tuned!

CXXNET (spelled as: C plus plus net) is a neural network toolkit built on mshadow (https://github.com/tqchen/mshadow). It is yet another implementation of (convolutional) neural networks. It is written in C++, with about 1000 lines of network layer implementations, is easy to configure via config files, and achieves state-of-the-art performance.

Creators: Tianqi Chen and Bing Xu

Documentation and Tutorial: https://github.com/antinucleon/cxxnet/wiki

Features

  • Small but sharp knife: the core part of the implementation is less than 2000 lines, and easily extensible.
    • cxxnet is built with mshadow, a tensor template library for unified CPU/GPU computation; as a result, every function is implemented only once and runs on both devices (a short sketch follows this list).
  • Speed: On Bing Xu’s EVGA GeForce GTX 780 with 2304 CUDA cores, cxxnet achieved 211 images per second when training on ImageNet data with Alex Krizhevsky’s deep network structure. Prediction speed is 400 images per second on the same card.
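
To make the write-once idea concrete, here is a minimal sketch of the pattern: a function written against mshadow::Tensor<xpu, dim> is instantiated for both mshadow::cpu and mshadow::gpu. The function name Scale and the expression in it are illustrative assumptions, not actual cxxnet layer code.

    // Illustrative sketch only -- not cxxnet's actual layer code.
    // mshadow tensors are templated on a device type (mshadow::cpu or
    // mshadow::gpu), so one definition compiles for both devices.
    #include "mshadow/tensor.h"

    template<typename xpu>              // xpu = mshadow::cpu or mshadow::gpu
    void Scale(mshadow::Tensor<xpu, 2> out,
               mshadow::Tensor<xpu, 2> in,
               float alpha) {
      // mshadow expression template: evaluated element-wise on whichever
      // device the tensors live on, with no device-specific code here.
      out = in * alpha;
    }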

Build Guide

  • Common requirements: NVIDIA CUDA with cuBLAS, cuRAND and the CUDA runtime (cudart); OpenCV; mshadow (downloaded automatically by build.sh)
  • MKL version: with Intel MKL installed, simply run build.sh
  • If you don’t have MKL, run build.sh blas=1 to build with CBLAS (a sketch of typical build commands follows this list)
    • Depending on your CBLAS implementation (ATLAS, etc.), you may need to change -lblas to -lcblas in the Makefile
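
A typical build session might look like the following. This is a sketch based on the notes above; the exact flags depend on your system and BLAS installation.

    # With Intel MKL installed: build directly.
    ./build.sh

    # Without MKL: build against CBLAS instead.
    ./build.sh blas=1

    # If your CBLAS is provided by ATLAS or similar, you may also need to
    # edit the Makefile and change -lblas to -lcblas before building.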