A flexible framework of neural networks for deep learning


IntelChainer: Optimized-Chainer for Intel Architectures


Chainer* is a Python*-based deep learning framework aimed at flexibility and intuitiveness. It provides automatic differentiation APIs based on the define-by-run approach (a.k.a. dynamic computational graphs) as well as object-oriented high-level APIs for building and training neural networks. It supports various network architectures, including feed-forward nets, convnets, recurrent nets, and recursive nets, as well as per-batch architectures. Forward computation can include any Python control flow statements without losing the ability to backpropagate, which makes code intuitive and easy to debug. Intel® optimization for Chainer is currently integrated with the latest release of Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) 2017, optimized for the Intel® Advanced Vector Extensions 2 (Intel® AVX2) and Intel® Advanced Vector Extensions 512 (Intel® AVX-512) instructions supported in Intel® Xeon® and Intel® Xeon Phi™ processors.
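To illustrate the define-by-run idea, here is a conceptual sketch of a tiny scalar autograd (hypothetical code, not Chainer's actual implementation): the computational graph is recorded as the forward pass executes, so ordinary Python control flow shapes the graph differently on each run while backpropagation still works.

```python
# Conceptual define-by-run sketch (hypothetical minimal autograd,
# NOT Chainer's API): each operation records its parents and local
# gradients as it runs, so the graph is built during forward execution.

class Var:
    def __init__(self, value, parents=(), grad_fns=()):
        self.value = value
        self.parents = parents    # Vars this node was computed from
        self.grad_fns = grad_fns  # local gradient w.r.t. each parent
        self.grad = 0.0

    def __mul__(self, other):
        return Var(self.value * other.value,
                   parents=(self, other),
                   grad_fns=(lambda g: g * other.value,
                             lambda g: g * self.value))

    def __add__(self, other):
        return Var(self.value + other.value,
                   parents=(self, other),
                   grad_fns=(lambda g: g, lambda g: g))

    def backward(self, g=1.0):
        self.grad += g
        for parent, grad_fn in zip(self.parents, self.grad_fns):
            parent.backward(grad_fn(g))

def forward(x):
    # Arbitrary Python control flow in the forward pass:
    # the recorded graph depends on the input value.
    y = x * x
    if y.value > 1.0:
        y = y + x
    return y

x = Var(3.0)
y = forward(x)  # branch taken: y = x^2 + x = 12.0
y.backward()    # dy/dx = 2x + 1 = 7.0
print(y.value, x.grad)
```

Because the graph is rebuilt on every forward pass, the `if` branch above is differentiated only when it actually executes, which is what makes per-input architectures and easy debugging possible.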

Recommended Environments

We recommend these Linux distributions.

  • Ubuntu 16.04 LTS 64bit
  • CentOS 7 64bit

The following versions of Python can be used:

  • 2.7.10+, 3.5.2+, and 3.6.0+

The recommended environments above are the ones we test. We cannot guarantee that Intel® optimization for Chainer works in other environments, including Windows* and macOS*, even if it appears to run correctly.

Install Chainer from source

You can use setup.py to install Chainer from the tarball:

$ python setup.py install

ideep4py has been split out of Chainer, so you also need to install it separately:

$ pip install ideep4py
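After installation, you can confirm that the ideep4py package is visible on your import path. A minimal stdlib-only check (the helper name here is ours, not part of Chainer or ideep4py):

```python
# Check whether the ideep4py package can be found on the import path
# (helper name is hypothetical; uses only the standard library and
# makes no assumptions about Chainer's own API).
import importlib.util

def ideep_installed():
    """Return True if ideep4py can be located by the import system."""
    return importlib.util.find_spec("ideep4py") is not None

print("ideep4py available:", ideep_installed())
```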

Use pip to uninstall chainer and ideep4py:

$ pip uninstall chainer ideep4py

Training Examples

Training test with the MNIST dataset:

$ cd examples/mnist
$ python train_mnist.py -g -1

Training test with the CIFAR datasets:

  • Run with the CIFAR-100 dataset:
$ cd examples/cifar
$ python train_cifar.py -g -1 --dataset='cifar100'
  • Run with the CIFAR-10 dataset:
$ cd examples/cifar
$ python train_cifar.py -g -1 --dataset='cifar10'
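In both examples, `-g -1` selects the CPU device, which is where the iDeep/MKL-DNN optimizations apply. A sketch of how flags like these are typically parsed (argument names assumed here for illustration; see the actual train_*.py scripts for the real definitions):

```python
# Hypothetical sketch of the example scripts' command-line parsing
# (flag names assumed for illustration; consult train_cifar.py for
# the authoritative definitions).
import argparse

parser = argparse.ArgumentParser(description='CIFAR training sketch')
parser.add_argument('--gpu', '-g', type=int, default=-1,
                    help='GPU ID (-1 runs on CPU, where iDeep/MKL-DNN applies)')
parser.add_argument('--dataset', default='cifar10',
                    choices=('cifar10', 'cifar100'),
                    help='which CIFAR dataset to train on')

# Parse the same flags as the CIFAR-100 command above.
args = parser.parse_args(['-g', '-1', '--dataset', 'cifar100'])
print(args.gpu, args.dataset)
```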

Single Node Performance Test Configurations

For Single Node Performance Test Configurations, please refer to the following wiki:



License

MIT License (see LICENSE file).


Reference

Tokui, S., Oono, K., Hido, S. and Clayton, J. Chainer: a Next-Generation Open Source Framework for Deep Learning. Proceedings of the Workshop on Machine Learning Systems (LearningSys) at the Twenty-ninth Annual Conference on Neural Information Processing Systems (NIPS), 2015. URL, BibTex

More Information