Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with Dynamic, Mutation-aware Dataflow Dep Scheduler; for Python, R, Julia, Scala, Go, Javascript and more
Apache MXNet (incubating) for Deep Learning

Apache MXNet (incubating) is a deep learning framework designed for both efficiency and flexibility. It allows you to mix symbolic and imperative programming to maximize efficiency and productivity. At its core, MXNet contains a dynamic dependency scheduler that automatically parallelizes both symbolic and imperative operations on the fly. A graph optimization layer on top of that makes symbolic execution fast and memory efficient. MXNet is portable and lightweight, scaling effectively to multiple GPUs and multiple machines.
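The idea behind such a dependency scheduler can be sketched in a few lines of plain Python. This is a toy illustration only, not MXNet's actual C++ engine: each pushed operation declares the variables it reads and writes, and the scheduler runs operations concurrently while making each one wait for the producers of its inputs (and, to stay mutation-safe, for the previous writer of its outputs). The names `ToyEngine` and `push` are hypothetical, chosen for this sketch.

```python
from concurrent.futures import ThreadPoolExecutor

class ToyEngine:
    """Toy dataflow dependency scheduler (illustrative, not MXNet's engine)."""

    def __init__(self, workers=4):
        self._pool = ThreadPoolExecutor(max_workers=workers)
        self._last_write = {}  # variable name -> Future that last wrote it

    def push(self, fn, reads, writes):
        # An op must wait for the producer of everything it reads, and for
        # the previous writer of anything it writes (write-after-write).
        deps = [self._last_write[v] for v in (list(reads) + list(writes))
                if v in self._last_write]

        def run():
            for d in deps:
                d.result()  # block until each dependency has finished
            return fn()

        fut = self._pool.submit(run)
        for v in writes:
            self._last_write[v] = fut  # later readers of v will wait on fut
        return fut

    def wait_all(self):
        # Block until every pending write has completed.
        for fut in list(self._last_write.values()):
            fut.result()
```

Two independent operations pushed this way run in parallel, while an operation that reads a variable automatically waits for the operation that writes it; the real engine applies the same idea to both imperative and symbolic execution.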

MXNet is more than a deep learning project. It is a collection of blueprints and guidelines for building deep learning systems, and a source of interesting insights into DL systems for hackers.

Ask Questions

How to Contribute

What's New

Contents

Features

  • Design notes providing useful insights that can be re-used by other DL projects
  • Flexible configuration for arbitrary computation graphs
  • Mix and match imperative and symbolic programming to maximize flexibility and efficiency
  • Lightweight, memory-efficient, and portable to smart devices
  • Scales up to multiple GPUs and distributed settings with automatic parallelism
  • Support for Python, Scala, C++, Java, Clojure, R and Julia
  • Cloud-friendly and directly compatible with S3, HDFS, and Azure
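The contrast between the two programming styles in the list above can be sketched in plain Python. This is a toy illustration, not MXNet's API: imperative code computes each step eagerly, while symbolic code first builds a description of the computation and only evaluates it when given concrete inputs. The names `Sym` and `var` are hypothetical, invented for this sketch.

```python
# Imperative style: every step is computed immediately.
def imperative_add_mul(x, y):
    s = x + y        # computed now
    return s * 2     # computed now

# Symbolic style: build a graph of the computation first...
class Sym:
    def __init__(self, fn):
        self.fn = fn

    def __add__(self, other):
        return Sym(lambda env: self.fn(env) + other.fn(env))

    def __mul__(self, k):
        return Sym(lambda env: self.fn(env) * k)

    def eval(self, **env):
        # ...and only compute when concrete inputs are supplied.
        return self.fn(env)

def var(name):
    return Sym(lambda env: env[name])

graph = (var('x') + var('y')) * 2   # nothing is computed yet
```

Deferring execution this way is what lets a graph optimization layer inspect the whole computation before running it, which is where the speed and memory savings come from.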

License

Licensed under an Apache-2.0 license.

Reference Paper

Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems. In Neural Information Processing Systems, Workshop on Machine Learning Systems, 2015

History

MXNet emerged from a collaboration by the authors of cxxnet, minerva, and purine2, and reflects what we learned from those projects. It combines aspects of each to achieve flexibility, speed, and memory efficiency.
