
Distributed Deep Learning with ChainerMN
========================================

ChainerMN enables multi-node distributed deep learning with the following features:

* Scalable --- it makes full use of the latest technologies such as NVIDIA NCCL and CUDA-Aware MPI,
* Flexible --- even dynamic neural networks can be trained in parallel thanks to Chainer's flexibility, and
* Easy --- minimal changes to existing user code are required (see the sketch after this list).
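As a minimal sketch of how small those changes typically are (``MyModel`` is a placeholder for an ordinary Chainer model; the calls shown are ChainerMN's communicator-creation and optimizer-wrapping API), a single-GPU training script becomes multi-node by adding a communicator and wrapping the optimizer:

.. code-block:: python

   import chainer
   import chainermn

   # A communicator handles all inter-process communication
   # (conceptually similar to MPI_COMM_WORLD).
   comm = chainermn.create_communicator()

   # One MPI process drives one GPU; intra_rank is the
   # process rank within the local node.
   device = comm.intra_rank
   chainer.cuda.get_device_from_id(device).use()

   model = MyModel()  # any ordinary Chainer model

   # Wrapping the optimizer is the key change: gradients are
   # all-reduced across all workers before each parameter update.
   optimizer = chainermn.create_multi_node_optimizer(
       chainer.optimizers.Adam(), comm)
   optimizer.setup(model)

The rest of the training loop (or ``chainer.training.Trainer`` setup) stays essentially unchanged, which is what makes porting existing code straightforward.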

Our blog post provides benchmark results using up to 128 GPUs.

ChainerMN can be used in both intra-node (i.e., multiple GPUs inside a single node) and inter-node settings. For inter-node settings, we highly recommend using a high-speed interconnect such as InfiniBand.
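A usage sketch for both settings, assuming an MPI implementation such as Open MPI is installed; ``train.py`` and the ``hosts`` file are hypothetical names, and the flags follow Open MPI's conventions. One MPI process is launched per GPU:

.. code-block:: console

   # 4 processes on a single node (intra-node, one process per GPU)
   $ mpiexec -n 4 python train.py

   # 8 processes spread across the nodes listed in a hostfile (inter-node)
   $ mpiexec -n 8 -hostfile hosts python train.py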

ChainerMN examples are available on GitHub. They are based on Chainer's own examples, with the differences highlighted.

.. toctree::
   :maxdepth: 2

