Deep Learning library for Python. Convnets, recurrent neural networks, and more. Runs on Theano or TensorFlow.
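As a quick illustration, here is a minimal sketch of the library's Sequential API from this era; the layer sizes, optimizer, and loss are illustrative choices, not project defaults.

```python
# Minimal sketch of the Sequential API; layer sizes, optimizer,
# and loss below are illustrative choices, not project defaults.
from keras.models import Sequential
from keras.layers import Dense, Activation

model = Sequential()
model.add(Dense(64, input_dim=100))   # fully connected layer on 100-d input
model.add(Activation('relu'))
model.add(Dense(10))
model.add(Activation('softmax'))

# Compilation builds the graph on the configured backend (Theano or
# TensorFlow); training would then proceed with model.fit(X, y).
model.compile(optimizer='sgd', loss='categorical_crossentropy')
```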
NumPy interface with mixed backend execution
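A minimal sketch, assuming this entry refers to MinPy, whose documented idiom is to import a NumPy-compatible namespace in place of numpy itself; array operations are then dispatched to a CPU (NumPy) or GPU (MXNet) backend.

```python
# Minimal sketch, assuming the MinPy convention of importing its
# NumPy-compatible namespace in place of numpy itself.
import minpy.numpy as np

x = np.zeros((2, 3))   # allocated on whichever backend the policy picks
y = np.exp(x) + 1.0    # same NumPy-style calls, mixed backend execution
print(y)
```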
Scalable, portable, and distributed gradient boosting (GBDT, GBRT, or GBM) library for Python, R, Java, Scala, C++, and more. Runs on a single machine as well as on Hadoop, Spark, Flink, and DataFlow.
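A minimal single-machine sketch of the Python interface; the synthetic data and parameter values are illustrative only.

```python
# Minimal single-machine sketch of the Python API; the synthetic data
# and parameter values here are illustrative, not recommendations.
import numpy as np
import xgboost as xgb

X = np.random.rand(100, 5)
y = np.random.randint(2, size=100)

dtrain = xgb.DMatrix(X, label=y)            # pack data for training
params = {'objective': 'binary:logistic',   # binary classification
          'max_depth': 3, 'eta': 0.1}
bst = xgb.train(params, dtrain, num_boost_round=50)

preds = bst.predict(xgb.DMatrix(np.random.rand(10, 5)))
```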
MXNet Julia Package - flexible and efficient deep learning in Julia
A library of common building blocks ("bricks") for scalable and portable distributed machine learning.
C++ wrapper for the MXNet interface
Notebooks for MXNet
Intermediate Computational Graph Representation for Deep Learning Systems
Matrix Shadow: Lightweight CPU/GPU Matrix and Tensor Template Library in C++/CUDA for (Deep) Machine Learning
The repository hosting web data, including images used by documentation, for DMLC projects.
The homepage: http://dmlc.ml
A lightweight parameter server interface
Drat Repository for DMLC R packages
Pre-trained models from DMLC projects
Redirects mxnet.readthedocs.io to mxnet.io
Reliable Allreduce and Broadcast Interface for distributed machine learning
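A hedged sketch of the Allreduce idea using the Python wrapper shipped with the library; the module layout and op constants may differ across versions. Each worker contributes a local vector, and after the call every worker holds the element-wise sum.

```python
# Hedged sketch using the Python wrapper shipped with the library;
# module layout and op constants may differ across versions.
import numpy as np
import rabit

rabit.init()
rank = rabit.get_rank()

local = np.ones(3) * (rank + 1)            # each worker's contribution
total = rabit.allreduce(local, rabit.SUM)  # every worker gets the sum
print('rank %d sees %s' % (rank, str(total)))

rabit.finalize()
```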
Kernel Fusion and Runtime Compilation Based on NNVM
Caffe: a fast open framework for deep learning.
Benchmarks speed and other issues internally before pushing to deep-mark
XGBoost Julia Package
Cache-friendly multithreaded matrix factorization
Sublinear memory optimization for deep learning; reduces GPU memory cost to train deeper nets (see the sketch below)
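A hedged sketch of how this is applied, assuming the search_plan helper described in the repo's README: it rewrites an MXNet symbol so selected activations are recomputed during the backward pass instead of kept, trading extra compute for sublinear memory growth with depth.

```python
# Hedged sketch; memonger.search_plan is the helper described in the
# repo's README, and its exact signature may differ by version.
import mxnet as mx
import memonger

data = mx.sym.Variable('data')
net = mx.sym.FullyConnected(data=data, num_hidden=512, name='fc1')
net = mx.sym.Activation(data=net, act_type='relu', name='relu1')
net = mx.sym.FullyConnected(data=net, num_hidden=10, name='fc2')

# Search a memory plan for the given input shape; the returned symbol
# computes the same function with lower peak GPU memory.
net_planned = memonger.search_plan(net, data=(64, 1024))
```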
MXNet tutorial for NVIDIA GTC 2016.
Distributed Factorization Machines
Minerva: a fast and flexible tool for multi-GPU deep learning. It provides an ndarray programming interface just like NumPy, with both Python and C++ bindings. The resulting code can run on CPU or GPU, and multi-GPU support is straightforward.
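A heavily hedged sketch of that NumPy-like ndarray interface through the project's owl Python binding; the device-management and array-creation names below follow its tutorials and may differ by version.

```python
# Heavily hedged sketch of the owl Python binding; device and array
# creation calls follow the project's tutorials and may vary by version.
import owl

gpu = owl.create_gpu_device(0)   # assumed device-management API
owl.set_device(gpu)              # subsequent arrays live on this GPU

a = owl.zeros([256, 128])        # NumPy-like ndarray creation
b = owl.randn([256, 128], 0.0, 0.01)
c = a + b                        # element-wise ops execute on the GPU
```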