Amazon DSSTNE: Deep Scalable Sparse Tensor Network Engine

DSSTNE (pronounced "Destiny") is an open source software library for training and deploying recommendation models with sparse inputs, fully connected hidden layers, and sparse outputs. Models whose weight matrices are too large for a single GPU can still be trained on a single host. DSSTNE has been used at Amazon to generate personalized product recommendations for our customers at Amazon's scale, and it is designed for production deployment of real-world applications that need to emphasize speed and scale over experimental flexibility.

DSSTNE was built with a number of features for production recommendation workloads:

  • Multi-GPU Scale: Training and prediction both scale out to use multiple GPUs, spreading out computation and storage in a model-parallel fashion for each layer.
  • Large Layers: Model-parallel scaling enables larger networks than are possible with a single GPU.
  • Sparse Data: DSSTNE is optimized for fast performance on the sparse datasets common in recommendation problems. Custom GPU kernels perform sparse computation directly on the GPU, without materializing the zero entries (see the sketch after this list).

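The sparse-input optimization is easiest to see in a small kernel. The sketch below is not DSSTNE's actual kernel; the names, memory layout, and sigmoid activation are assumptions for illustration. It only shows the core idea: when inputs are sparse binary vectors stored as lists of active feature indices, the forward pass of a fully connected layer reduces to summing the weight rows of those active features, so zero-valued inputs are never touched.

```cuda
// Illustrative sketch only, not DSSTNE's implementation.
// Sparse binary inputs are given in CSR-like form: per-example start/end
// offsets into a flat array of nonzero feature indices.
#include <cstdint>

__global__ void SparseForward(const uint32_t* sparseStart,  // per-example start offset
                              const uint32_t* sparseEnd,    // per-example end offset
                              const uint32_t* sparseIndex,  // nonzero feature indices
                              const float*    weight,       // [inputDim x hiddenDim], row-major (assumed layout)
                              const float*    bias,         // [hiddenDim]
                              float*          hidden,       // [batch x hiddenDim] activations out
                              uint32_t        hiddenDim)
{
    uint32_t example = blockIdx.x;          // one block per example
    uint32_t begin   = sparseStart[example];
    uint32_t end     = sparseEnd[example];

    // Each thread owns a strided subset of the hidden units. In a
    // model-parallel setting, each GPU would similarly own a slice of hiddenDim.
    for (uint32_t h = threadIdx.x; h < hiddenDim; h += blockDim.x)
    {
        float sum = bias[h];
        // Accumulate only the weight rows of the active (nonzero) inputs;
        // no dense, zero-filled matrix multiply is ever formed.
        for (uint32_t i = begin; i < end; i++)
        {
            uint32_t feature = sparseIndex[i];
            sum += weight[feature * hiddenDim + h];
        }
        // Sigmoid activation, a common choice for a hidden layer.
        hidden[example * hiddenDim + h] = 1.0f / (1.0f + expf(-sum));
    }
}

// Example launch: one block per example, 128 threads per block.
// SparseForward<<<batchSize, 128>>>(start, end, index, weight, bias, hidden, hiddenDim);
```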
Benchmarks

  • See Scaling up in the benchmarks directory for multi-GPU training performance results

License

  • See LICENSE for the terms under which DSSTNE is distributed

Setup

  • Follow Setup for step-by-step instructions on installing and setting up DSSTNE

User Guide

  • Check the User Guide for detailed information on DSSTNE's features

Examples

  • Check Examples to start building your first neural network models with DSSTNE

Q&A

  • See the FAQ for answers to common questions
