Apache MXNet (incubating) for Deep Learning

Apache MXNet (incubating) is a deep learning framework designed for both efficiency and flexibility. It allows you to mix symbolic and imperative programming to maximize efficiency and productivity. At its core, MXNet contains a dynamic dependency scheduler that automatically parallelizes both symbolic and imperative operations on the fly. A graph optimization layer on top of that makes symbolic execution fast and memory efficient. MXNet is portable and lightweight, scaling effectively to multiple GPUs and multiple machines.
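
For illustration, here is a minimal sketch (not part of the original README) of mixing the two styles with the Python API; the shapes and layer sizes are arbitrary:

```python
import mxnet as mx
from mxnet.gluon import nn

# Imperative: NDArray operations execute eagerly, NumPy-style,
# while the dependency scheduler runs them asynchronously.
x = mx.nd.ones((2, 3))
y = (x * 2 + 1).sum()
print(y.asscalar())  # blocks on the async engine; prints 18.0

# Symbolic via hybridization: the same Gluon network runs
# imperatively until hybridize() compiles it into a graph that
# the optimization layer can make fast and memory efficient.
net = nn.HybridSequential()
net.add(nn.Dense(64, activation='relu'))
net.add(nn.Dense(10))
net.initialize()
net.hybridize()  # switch from imperative to symbolic execution
out = net(mx.nd.random.uniform(shape=(1, 784)))
```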

MXNet is more than a deep learning project. It is also a collection of blueprints and guidelines for building deep learning systems, and a source of interesting insights into DL systems for hackers.

Ask Questions

How to Contribute

What's New

Contents

Features

  • Design notes providing useful insights that can be reused by other DL projects
  • Flexible configuration for arbitrary computation graphs
  • Mix and match imperative and symbolic programming to maximize flexibility and efficiency
  • Lightweight, memory efficient and portable to smart devices
  • Scales up to multiple GPUs and distributed settings with automatic parallelism (see the sketch after this list)
  • Support for Python, Scala, C++, Java, Clojure, R and Julia
  • Cloud-friendly and directly compatible with S3, HDFS, and Azure
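
To make the multi-device point above concrete, the sketch below splits a batch across devices with the Python API; the two-GPU setup is an assumption, and the code falls back to a CPU context when no GPUs are present.

```python
import mxnet as mx
from mxnet.gluon.utils import split_and_load

# Assumed hardware: two GPUs; otherwise run everything on the CPU.
ctxs = [mx.gpu(i) for i in range(2)] if mx.context.num_gpus() >= 2 else [mx.cpu()]

data = mx.nd.random.uniform(shape=(8, 784))
parts = split_and_load(data, ctxs)  # one shard per device

# Operations on different shards are queued to the dependency
# scheduler, which runs them in parallel across devices.
outs = [p * 2 for p in parts]
```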

License

Licensed under an Apache-2.0 license.

Reference Paper

Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems. In Neural Information Processing Systems, Workshop on Machine Learning Systems, 2015.

History

MXNet emerged from a collaboration among the authors of cxxnet, minerva, and purine2, and reflects what we learned from those projects. It combines aspects of each to achieve flexibility, speed, and memory efficiency.
