Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with a Dynamic, Mutation-aware Dataflow Dependency Scheduler; for Python, R, Julia, Scala, Go, JavaScript and more

Apache MXNet (incubating) for Deep Learning



Apache MXNet (incubating) is a deep learning framework designed for both efficiency and flexibility. It allows you to mix symbolic and imperative programming to maximize efficiency and productivity. At its core, MXNet contains a dynamic dependency scheduler that automatically parallelizes both symbolic and imperative operations on the fly. A graph optimization layer on top of that makes symbolic execution fast and memory efficient. MXNet is portable and lightweight, scaling effectively to multiple GPUs and multiple machines.

MXNet is more than a deep learning project. It is also a collection of blueprints and guidelines for building deep learning systems, and a source of interesting insights into DL systems for hackers.

Ask Questions

How to Contribute

What's New

Contents

Features

  • Design notes providing useful insights that can be re-used by other DL projects
  • Flexible configuration for arbitrary computation graphs
  • Mix and match imperative and symbolic programming to maximize flexibility and efficiency
  • Lightweight, memory-efficient, and portable to smart devices
  • Scales up to multiple GPUs and distributed settings with automatic parallelism
  • Support for Python, R, Scala, C++, and Julia
  • Cloud-friendly and directly compatible with S3, HDFS, and Azure

License

Licensed under the Apache License, Version 2.0.

Reference Paper

Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems. In Neural Information Processing Systems, Workshop on Machine Learning Systems, 2015.

History

MXNet emerged from a collaboration between the authors of cxxnet, minerva, and purine2. It reflects what we learned from those projects, combining aspects of each to achieve flexibility, speed, and memory efficiency.