xnor enhanced neural nets // Hasso Plattner Institute

A fork of the deep learning framework mxnet to study and implement quantization and binarization in neural networks.

Our current efforts focus on binarizing the inputs and weights of convolutional layers, which enables the use of fast bit operations instead of expensive matrix multiplications, as described in the XNOR-Net paper.

News

  • Dec 06, 2018 - BMXNet-v2

  • Dec 22, 2017 - MXNet v1.0.0 and cuDNN

    • We updated the underlying MXNet to version 1.0.0; see the MXNet v1.0.0 release notes for the list of changes.
    • cuDNN is now supported when training binary networks, speeding up the training process by about 2x.

Setup

We use cmake to build the project. Make sure to install all the dependencies described in the MXNet build documentation.

Adjust the settings in cmake (build type Release or Debug; CUDA, OpenBLAS or ATLAS, OpenCV, OpenMP, etc.).

$ git clone --recursive https://github.com/hpi-xnor/mxnet.git # remember to include the --recursive
$ mkdir -p build/Release && cd build/Release
$ cmake ../../ # if any error occurs, use ccmake or cmake-gui to adjust the configuration
$ ccmake . # or the cmake GUI
$ make -j `nproc`
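
For a non-interactive build you can pass the options on the command line instead. The flag names below follow the standard MXNet CMake build and may differ in this fork, so verify them in ccmake:

$ cmake ../../ -DCMAKE_BUILD_TYPE=Release -DUSE_CUDA=ON -DUSE_CUDNN=ON -DUSE_OPENCV=ON
$ make -j `nproc`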

Build the MXNet Python binding

Step 1 Install prerequisites - python, setuptools, python-pip and numpy.

$ sudo apt-get install -y python-dev python-setuptools python-numpy python-pip

Step 2 Install the MXNet Python binding.

$ cd <mxnet-root>/python
$ pip install --upgrade pip
$ pip install -e .

If your mxnet Python binding still does not work, add the location of the library to your LD_LIBRARY_PATH and the mxnet python folder to your PYTHONPATH:

$ export LD_LIBRARY_PATH=<mxnet-root>/build/Release
$ export PYTHONPATH=<mxnet-root>/python
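
You can then verify the binding with a quick sanity check:

$ python -c "import mxnet as mx; print(mx.__version__)"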

Docker

There is a simple Dockerfile that you can use to ease the setup process. Once running, you will find mxnet at /mxnet and the build folder at /mxnet/release. (Be warned though: CUDA will not work inside the container, so the training process can be quite slow.)

$ cd <mxnet-root>/smd_hpi/tools/docker
$ docker build -t mxnet . # note the trailing dot: the build context
$ docker run -t -i mxnet

You probably also want to map a folder to share files (e.g. trained models) with the container (-v <absolute local path>:/shared), for example as shown below.
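
For example (the local path is just a placeholder):

$ docker run -t -i -v /home/me/models:/shared mxnet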

Usage

Our main contributions are drop-in replacements for the Convolution, FullyConnected and Activation layers of mxnet, called QConvolution, QFullyConnected and QActivation.

These can be used when specifying a model. They extend the parameters of their corresponding original layer of mxnet with act_bit for activations and weight_bit for weights.
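
A minimal sketch of a model definition using these layers follows. The exact operator signatures are an assumption based on the parameter description above; see the examples in this repository for canonical usage:

import mxnet as mx

data = mx.symbol.Variable('data')
# the first layer is commonly kept in full precision in binary networks
conv0 = mx.symbol.Convolution(data=data, kernel=(5, 5), num_filter=64)
act0 = mx.symbol.Activation(data=conv0, act_type='tanh')
# binarized block: QActivation binarizes the inputs,
# QConvolution additionally binarizes its weights
qact1 = mx.symbol.QActivation(data=act0, act_bit=1)
qconv1 = mx.symbol.QConvolution(data=qact1, kernel=(3, 3), num_filter=128,
                                act_bit=1, weight_bit=1)
qfc = mx.symbol.QFullyConnected(data=mx.symbol.Flatten(data=qconv1),
                                num_hidden=10, act_bit=1, weight_bit=1)
net = mx.symbol.SoftmaxOutput(data=qfc, name='softmax')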

Quantization

Set the parameters act_bit and weight_bit to a value between 1 and 32 to quantize the activations and weights to that bit width.

Quantization with bit widths from 2 to 31 is available mainly for scientific purposes. There is no speed or memory gain (rather the opposite, since extra conversion steps are needed), because the quantized values are still stored in full-precision float variables.
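
For instance, a 4-bit quantized layer (same assumed signature as above) would be declared like this; it behaves numerically like a quantized layer but runs at full-precision speed:

import mxnet as mx

data = mx.symbol.Variable('data')
# 4-bit activations and weights: no runtime gain, useful for experiments
fc = mx.symbol.QFullyConnected(data=data, num_hidden=512, act_bit=4, weight_bit=4)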

Binarization

To binarize the weights, first set weight_bit=1 and act_bit=1, then train your network (you can use CUDA/cuDNN). The resulting .params file will contain binary weights, but will still store each weight in a full 32-bit float.

To convert your trained and saved network, call the model converter with your .params file:

$ <mxnet-root>/build/Release/smd_hpi/tools/model_converter mnist-0001.params

This will generate new .params and .json files whose names are prefixed with binarized_. This model will use only 1 bit of runtime memory and storage for every weight in the convolutional layers.
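
The converted model can then be loaded like a regular MXNet checkpoint (a sketch, assuming the converter was run on mnist-0001.params as above):

import mxnet as mx

# 'binarized_mnist' and epoch 1 follow the naming of the converted files
sym, arg_params, aux_params = mx.model.load_checkpoint('binarized_mnist', 1)
mod = mx.mod.Module(symbol=sym, context=mx.cpu())
mod.bind(data_shapes=[('data', (1, 1, 28, 28))], for_training=False)
mod.set_params(arg_params, aux_params)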

We provide example Python scripts to train and validate resnet18 (CIFAR-10, ImageNet) and lenet (MNIST) networks with binarized layers.

There are example applications for iOS and Android that can utilize binarized networks; they are maintained in separate repositories.

Have a look at our source, tools and examples to find out more.

Citing BMXNet

Please cite BMXNet in your publications if it helps your research:

@inproceedings{bmxnet,
  author    = {Yang, Haojin and Fritzsche, Martin and Bartz, Christian and Meinel, Christoph},
  title     = {BMXNet: An Open-Source Binary Neural Network Implementation Based on MXNet},
  booktitle = {Proceedings of the 2017 ACM on Multimedia Conference},
  series    = {MM '17},
  year      = {2017},
  isbn      = {978-1-4503-4906-2},
  location  = {Mountain View, California, USA},
  pages     = {1209--1212},
  numpages  = {4},
  url       = {http://doi.acm.org/10.1145/3123266.3129393},
  doi       = {10.1145/3123266.3129393},
  acmid     = {3129393},
  publisher = {ACM},
  address   = {New York, NY, USA},
  keywords  = {binary neural networks, computer vision, machine learning, open source},
}
