Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with a Dynamic, Mutation-aware Dataflow Dependency Scheduler; for Python, R, Julia, Scala, Go, JavaScript and more

MXNet for Deep Learning



MXNet is a deep learning framework designed for both efficiency and flexibility. It allows you to mix the flavours of symbolic and imperative programming to maximize efficiency and productivity. At its core is a dynamic dependency scheduler that automatically parallelizes both symbolic and imperative operations on the fly. A graph optimization layer on top of that makes symbolic execution fast and memory efficient. The library is portable and lightweight, and it scales to multiple GPUs and multiple machines.
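The dependency-scheduling idea can be illustrated with a short sketch. The class below is a hypothetical toy in plain Python, not MXNet's actual engine (which is implemented in C++): each operation declares which variables it reads and which it writes, and the scheduler runs independent operations in parallel while serializing read-after-write, write-after-write, and write-after-read hazards — this is what "mutation-aware" means.

```python
# Toy mutation-aware dependency scheduler (illustration only, not MXNet's engine).
import threading
from concurrent.futures import ThreadPoolExecutor

class DepScheduler:
    def __init__(self, workers=4):
        self._pool = ThreadPoolExecutor(max_workers=workers)
        self._writes = {}   # var -> Future of the last write to it
        self._reads = {}    # var -> Futures of reads since the last write
        self._lock = threading.Lock()

    def push(self, fn, reads=(), writes=()):
        """Schedule fn(); it runs only after its data dependencies finish."""
        with self._lock:
            deps = []
            for v in reads:                 # read-after-write hazard
                if v in self._writes:
                    deps.append(self._writes[v])
            for v in writes:                # write-after-write + write-after-read
                if v in self._writes:
                    deps.append(self._writes[v])
                deps.extend(self._reads.get(v, []))
            fut = self._pool.submit(self._run, fn, deps)
            for v in reads:
                self._reads.setdefault(v, []).append(fut)
            for v in writes:
                self._writes[v] = fut
                self._reads[v] = []         # writes clear the pending readers
            return fut

    @staticmethod
    def _run(fn, deps):
        for d in deps:                      # block until all dependencies complete
            d.result()
        return fn()

# Usage: the three ops below all touch variable 'a', so they run in order,
# even though they are pushed asynchronously onto a thread pool.
sched = DepScheduler(workers=2)
log = []
sched.push(lambda: log.append('init a'), writes=['a'])
sched.push(lambda: log.append('read a'), reads=['a'])
last = sched.push(lambda: log.append('update a'), reads=['a'], writes=['a'])
last.result()
```

Operations on disjoint variables share no futures in their dependency lists, so the pool is free to execute them concurrently.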

MXNet is more than a deep learning project. It is also a collection of blueprints and guidelines for building deep learning systems, and a source of interesting insights into DL systems for hackers.


Features

  • Design notes providing useful insights that can be reused by other DL projects
  • Flexible configuration for arbitrary computation graphs
  • Mix and match imperative and symbolic flavours of programming to maximize flexibility and efficiency
  • Lightweight, memory efficient, and portable to smart devices
  • Scales up to multiple GPUs and distributed settings with automatic parallelism
  • Support for Python, R, C++ and Julia
  • Cloud-friendly and directly compatible with S3, HDFS, and Azure
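The "mix and match" point can be made concrete with a toy contrast between the two programming styles. This is a hypothetical illustration in plain Python, not MXNet's API: imperative code computes each result immediately, while symbolic code first builds an expression graph that is later bound to inputs and executed — deferral is what allows a graph optimizer to rewrite the computation before it runs.

```python
# Imperative style: every call evaluates right away.
def imperative_square_sum(x, y):
    a = x * x          # computed immediately
    b = y * y
    return a + b

# Symbolic style: calls only build an expression graph (illustration only).
class Sym:
    def __init__(self, fn, name):
        self.fn, self.name = fn, name
    def __mul__(self, other):
        return Sym(lambda env: self.fn(env) * other.fn(env),
                   "(%s*%s)" % (self.name, other.name))
    def __add__(self, other):
        return Sym(lambda env: self.fn(env) + other.fn(env),
                   "(%s+%s)" % (self.name, other.name))
    def eval(self, **env):
        """Bind concrete inputs and execute the deferred graph."""
        return self.fn(env)

def var(name):
    return Sym(lambda env: env[name], name)

x, y = var("x"), var("y")
graph = x * x + y * y              # nothing computed yet, just a graph
# graph.name == "((x*x)+(y*y))"
# graph.eval(x=3, y=4) == 25, same as imperative_square_sum(3, 4)
```

Because the symbolic graph is an inspectable data structure before execution, an optimizer can fuse operations or plan memory for it, which is how a framework keeps symbolic execution fast and memory efficient.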

Ask Questions

  • Please use mxnet/issues for questions about how to use MXNet and for reporting bugs


© Contributors, 2015-2017. Licensed under the Apache-2.0 license.

Reference Paper

Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems. In Neural Information Processing Systems, Workshop on Machine Learning Systems, 2015.


MXNet was initiated and designed in collaboration by the authors of cxxnet, minerva and purine2. The project reflects what we have learned from those past projects, and it combines their important flavours to achieve efficiency, flexibility and memory efficiency.