
MXNet for Deep Learning


MXNet is a deep learning framework designed for both efficiency and flexibility. It allows you to mix symbolic and imperative programming to maximize efficiency and productivity. At its core, MXNet contains a dynamic dependency scheduler that automatically parallelizes both symbolic and imperative operations on the fly. A graph optimization layer on top of that makes symbolic execution fast and memory efficient. MXNet is portable and lightweight, scaling effectively to multiple GPUs and multiple machines.
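As a quick illustration of mixing the two styles, here is a minimal sketch using the Python binding (the array values and variable names are arbitrary):

import mxnet as mx

# Imperative style: NDArray operations execute eagerly.
x = mx.nd.array([[1., 2.], [3., 4.]])
y = x + 1                                  # computed immediately

# Symbolic style: declare a graph first, then bind data and run it.
a = mx.sym.Variable('a')
b = a * 2                                  # builds the graph; no computation yet
executor = b.bind(ctx=mx.cpu(), args={'a': y})
print(executor.forward()[0].asnumpy())     # the scheduler runs the graph here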

MXNet is more than a deep learning project. It is also a collection of blueprints and guidelines for building deep learning systems, and a source of interesting insights into DL systems for hackers.

Join the chat at https://gitter.im/dmlc/mxnet


Features

  • Design notes providing useful insights that can be reused by other DL projects
  • Flexible configuration for arbitrary computation graph
  • Mix and match imperative and symbolic programming to maximize flexibility and efficiency
  • Lightweight, memory efficient and portable to smart devices
  • Scales up to multiple GPUs and distributed settings with automatic parallelism
  • Support for Python, R, C++ and Julia
  • Cloud-friendly and directly compatible with S3, HDFS, and Azure
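As a sketch of the cloud-friendly point above: when MXNet is compiled with S3 support (USE_S3=1 in make/config.mk), NDArray I/O accepts S3 paths directly. The bucket and file below are hypothetical:

import mxnet as mx

# Hypothetical bucket/key; requires a build with USE_S3=1 and
# AWS credentials (e.g. AWS_ACCESS_KEY_ID) set in the environment.
params = mx.nd.load('s3://my-bucket/model-params.nd')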

Installation Guide for HIP Port

Generic Installation Steps:

Install the system requirements by following ROCm's installation guide.

Installation Steps on the HCC and NVCC Platforms

Prerequisites

Install CUDA 8.0, following NVIDIA's installation guide, to set up MXNet with GPU support on the NVCC platform.

Note: Make sure to add the CUDA install path to LD_LIBRARY_PATH, for example:

$ export LD_LIBRARY_PATH=/usr/local/cuda/lib64/:$LD_LIBRARY_PATH

Building MXNet from source is a two-step process:

  1. Build the MXNet core shared library, libmxnet.so, from the C++ sources.
  2. Build the language-specific bindings, for example the Python or Scala bindings.

Minimum Requirements

  1. GCC 4.8 or later, to compile C++11.
  2. GNU Make

ROCm installation

Step 1: Add the ROCm apt repository. For Debian-based systems such as Ubuntu, configure the ROCm repository as follows:

$ wget -qO - http://repo.radeon.com/rocm/apt/debian/rocm.gpg.key | sudo apt-key add -
$ sudo sh -c 'echo deb [arch=amd64] http://repo.radeon.com/rocm/apt/debian/ xenial main > /etc/apt/sources.list.d/rocm.list'

Step 2: Install or update ROCm. Update the apt repository list and install or update the rocm package. Warning: before proceeding, make sure to completely uninstall any previous ROCm packages.

$ sudo apt-get update
$ sudo apt-get install rocm

Step 3: Install dependent libraries

$ sudo apt-get install rocm-device-libs rocblas rocm-libs 
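To confirm that the ROCm stack can see your GPU, you can run the rocminfo tool that ships with ROCm (the path assumes a default install location):

$ /opt/rocm/bin/rocminfo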

For detailed installation steps, refer to the ROCm installation guide.

Build the MXNet core shared library

Step 1: Install build tools and git.

$ sudo apt-get update
$ sudo apt-get install -y build-essential git

Step 2: Install OpenCV.

MXNet uses OpenCV for efficient image loading and augmentation operations.

$ sudo apt-get install -y libopencv-dev

Step 3: Clone Thrust so that MXNet can build against it.

$ git clone --recursive https://github.com/ROCmSoftwarePlatform/Thrust

Then add the Thrust include path to the Makefile, replacing the placeholder with the path where you cloned Thrust:

ifeq ($(HIP_PLATFORM), hcc)
    # Example: HIPINCLUDE += -I../Thrust
    HIPINCLUDE += -I<Root path of Thrust>
endif

Step 4: Download the MXNet sources and build the MXNet core shared library.

$ git clone --recursive https://github.com/ROCmSoftwarePlatform/mxnet
$ cd mxnet

To compile on the HCC platform (replace <n> with the number of CPU cores):

$ export HIP_PLATFORM=hcc
$ make -j<n>

To compile on the NVCC platform:

$ export HIP_PLATFORM=nvcc
$ make -j<n>

Note:

  1. USE_OPENCV, USE_BLAS, USE_CUDA, and USE_CUDA_PATH are makefile flags that control whether OpenCV and the CUDA libraries are compiled in. You can explore more compilation options in make/config.mk. Make sure to set USE_CUDA_PATH to the correct CUDA installation path; in most cases this is /usr/local/cuda (see the example after this list).
  2. MXNet uses the rocBLAS, hcFFT, hcRNG, and LAPACK libraries for accelerated numerical computation. cuDNN is not enabled, as that functionality is being migrated to MIOpen.
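For instance, a GPU build on the NVCC platform might pass these flags on the make command line as follows. This is a sketch only; the BLAS choice and CUDA path are illustrative, so adjust them to your system:

$ export HIP_PLATFORM=nvcc
$ make -j<n> USE_OPENCV=1 USE_BLAS=openblas USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda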

Install the MXNet Python binding

Step 1: Install the prerequisites: Python, setuptools, pip, and NumPy.

$ sudo apt-get install -y python-dev python-setuptools python-numpy python-pip

Step 2: Install the MXNet Python binding.

$ cd python
$ sudo python setup.py install 
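As a quick smoke test of the binding, import MXNet from a Python shell and run a small NDArray computation (a minimal sketch; the values are arbitrary):

$ python
>>> import mxnet as mx
>>> a = mx.nd.ones((2, 3))
>>> (a * 2).asnumpy()
array([[ 2.,  2.,  2.],
       [ 2.,  2.,  2.]], dtype=float32)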

Ask Questions

  • Please use mxnet/issues for questions about using MXNet and for reporting bugs.

License

© Contributors, 2015-2017. Licensed under an Apache-2.0 license.

Reference Paper

Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems. In Neural Information Processing Systems, Workshop on Machine Learning Systems, 2015.

History

MXNet emerged from a collaboration by the authors of cxxnet, minerva, and purine2, and reflects what we learned from those projects. It combines aspects of each to achieve flexibility, speed, and memory efficiency.