DeepDetect : Open Source Deep Learning Server & API


DeepDetect (http://www.deepdetect.com/) is a machine learning API and server written in C++11. It makes state-of-the-art machine learning easy to work with and to integrate into existing applications.

DeepDetect relies on external machine learning libraries through a very generic and flexible API. At the moment it has support for:

  • the deep learning library Caffe
  • the distributed gradient boosting library XGBoost
  • the deep learning library Tensorflow
  • the deep learning libraries Caffe2 and Dlib
  • clustering with T-SNE

Machine Learning functionalities per library (current):

|            | Training | Prediction | Classification | Object Detection | Segmentation | Regression | Autoencoder | OCR / Seq2Seq |
| ---------- | -------- | ---------- | -------------- | ---------------- | ------------ | ---------- | ----------- | ------------- |
| Caffe      | Y        | Y          | Y              | Y                | Y            | Y          | Y           | Y             |
| Caffe2     | N        | Y          | N              | N                | N            | N          | N           | N             |
| XGBoost    | Y        | Y          | Y              | N                | N            | Y          | N/A         | N             |
| Tensorflow | N        | Y          | Y              | N                | N            | N          | N           | N             |
| Dlib       | N        | Y          | Y              | Y                | N            | N          | N           | N             |

GPU support per library

|            | Training | Prediction |
| ---------- | -------- | ---------- |
| Caffe      | Y        | Y          |
| Caffe2     | N        | Y          |
| XGBoost    | Y        | Y          |
| Tensorflow | N        | Y          |
| Dlib       | N        | Y          |

Input data support per library (current):

|            | CSV | SVM | Text words | Text characters | Images |
| ---------- | --- | --- | ---------- | --------------- | ------ |
| Caffe      | Y   | Y   | Y          | Y               | Y      |
| Caffe2     | N   | N   | N          | N               | Y      |
| XGBoost    | Y   | Y   | Y          | N               | N      |
| Tensorflow | N   | N   | N          | N               | Y      |
| Dlib       | N   | N   | N          | N               | Y      |

(*) more input support for T-SNE is pending

Main functionalities

DeepDetect implements supervised and unsupervised deep learning for images, text and other data, with a focus on simplicity and on ease of use, testing and integration into existing applications. It supports classification, object detection, segmentation, regression, autoencoders, and more.


Please join the community either on Gitter or on IRC Freenode #deepdetect, where we help users through installation, the API, neural nets and connection to external applications.

Supported Platforms

The reference platforms with support are Ubuntu 14.04 LTS and Ubuntu 16.04 LTS.

Supported images that come with pre-trained image classification deep (residual) neural nets:


See https://github.com/jolibrain/dd_performances for a performance report on NVidia desktop and embedded GPUs, as well as on Raspberry Pi 3.


Set up an image classifier API service in a few minutes: http://www.deepdetect.com/tutorials/imagenet-classifier/


List of tutorials, training from text, data and images, setup of prediction services, and export to external software (e.g. ElasticSearch): http://www.deepdetect.com/tutorials/tutorials/

Features and Documentation

Current features include:

  • high-level API for machine learning and deep learning
  • support for Caffe, Tensorflow, XGBoost and T-SNE
  • classification, regression, autoencoders, object detection, segmentation
  • JSON communication format
  • remote Python client library
  • dedicated server with support for asynchronous training calls
  • high performance, taking advantage of multicore CPUs and GPUs
  • built-in similarity search via neural embeddings
  • connector to handle large collections of images with on-the-fly data augmentation (e.g. rotations, mirroring)
  • connector to handle CSV files with preprocessing capabilities
  • connector to handle text files, sentences, and character-based models
  • connector to handle SVM file format for sparse data
  • range of built-in model assessment measures (e.g. F1, multiclass log loss, ...)
  • no database dependency and sync, all information and model parameters organized and available from the filesystem
  • flexible template output format to simplify connection to external applications
  • templates for the most useful neural architectures (e.g. Googlenet, Alexnet, ResNet, convnet, character-based convnet, mlp, logistic regression)
  • support for sparse features and computations on both GPU and CPU
  • built-in similarity indexing and search of predicted features and probability distributions
  • logging of DeepDetect training metrics via Tensorboard with dd_board
Dependencies

  • C++, gcc >= 4.8 or clang with support for C++11 (there are issues with Clang + Boost)
  • eigen for all matrix operations;
  • glog for logging events and debug;
  • gflags for command line parsing;
  • OpenCV >= 2.4
  • cppnetlib
  • Boost
  • curl
  • curlpp
  • utfcpp
  • gtest for unit testing (optional);

Caffe Dependencies
  • CUDA 9 or 8 is recommended for GPU mode.
  • BLAS via ATLAS, MKL, or OpenBLAS.
  • protobuf
  • IO libraries hdf5, leveldb, snappy, lmdb

XGBoost Dependencies

None beyond a C++ compiler and make

  • CUDA 8 is recommended for GPU mode.

Tensorflow Dependencies

  • Bazel
  • Cmake with version > 3

Dlib Dependencies

  • CUDA 8 and cuDNN 7 for GPU mode

Caffe version

By default DeepDetect relies on a modified version of Caffe, https://github.com/beniz/caffe/tree/master. This version includes many improvements over the original Caffe, such as sparse input data support, exception handling, class weights, object detection, segmentation, and various additional losses and layers.


The code makes use of C++ policy design for modularity and performance, shifting as many checks as possible to compile time. The implementation uses many features from C++11.

  • Image classification Web interface: HTML and JavaScript image classification demo in demo/imgdetect

  • Image similarity search: Python script for indexing and searching images is in demo/imgsearch

  • Image object detection: Python script for object detection within images is in demo/objdetect

  • Image segmentation: Python script for image segmentation is in demo/segmentation

| Model | Caffe | Tensorflow | Source | Top-1 Accuracy (ImageNet) |
| --- | --- | --- | --- | --- |
| AlexNet | Y | N | BVLC | 57.1% |
| SqueezeNet | Y | N | DeepScale | 59.5% |
| Inception v1 / GoogleNet | Y | Y | BVLC / Google | 67.9% |
| Inception v2 | N | Y | Google | 72.2% |
| Inception v3 | N | Y | Google | 76.9% |
| Inception v4 | N | Y | Google | 80.2% |
| ResNet 50 | Y | Y | MSR | 75.3% |
| ResNet 101 | Y | Y | MSR | 76.4% |
| ResNet 152 | Y | Y | MSR | 77% |
| Inception-ResNet-v2 | N | Y | Google | 79.79% |
| VGG-16 | Y | Y | Oxford | 70.5% |
| VGG-19 | Y | Y | Oxford | 71.3% |
| ResNext 50 | Y | N | https://github.com/terrychenism/ResNeXt | 76.9% |
| ResNext 101 | Y | N | https://github.com/terrychenism/ResNeXt | 77.9% |
| ResNext 152 | Y | N | https://github.com/terrychenism/ResNeXt | 78.7% |
| DenseNet-121 | Y | N | https://github.com/shicai/DenseNet-Caffe | 74.9% |
| DenseNet-161 | Y | N | https://github.com/shicai/DenseNet-Caffe | 77.6% |
| DenseNet-169 | Y | N | https://github.com/shicai/DenseNet-Caffe | 76.1% |
| DenseNet-201 | Y | N | https://github.com/shicai/DenseNet-Caffe | 77.3% |
| SE-BN-Inception | Y | N | https://github.com/hujie-frank/SENet | 76.38% |
| SE-ResNet-50 | Y | N | https://github.com/hujie-frank/SENet | 77.63% |
| SE-ResNet-101 | Y | N | https://github.com/hujie-frank/SENet | 78.25% |
| SE-ResNet-152 | Y | N | https://github.com/hujie-frank/SENet | 78.66% |
| SE-ResNext-50 | Y | N | https://github.com/hujie-frank/SENet | 79.03% |
| SE-ResNext-101 | Y | N | https://github.com/hujie-frank/SENet | 80.19% |
| SENet | Y | N | https://github.com/hujie-frank/SENet | 81.32% |
| VOC0712 (object detection) | Y | N | https://github.com/weiliu89/caffe/tree/ssd | 71.2 mAP |
| InceptionBN-21k | Y | N | https://github.com/pertusa/InceptionBN-21K-for-Caffe | 41.9% |
| Inception v3 5K | N | Y | https://github.com/openimages/dataset | |
| 5-point Face Landmarking Model (face detection) | N | N | http://blog.dlib.net/2017/09/fast-multiclass-object-detection-in.html | |
| Front/Rear vehicle detection (object detection) | N | N | http://blog.dlib.net/2017/09/fast-multiclass-object-detection-in.html | |

More models:


DeepDetect comes with a built-in system of neural network templates (Caffe backend only at the moment). This makes it simple to create custom networks based on recognized architectures, for images, text and data.


  • specify the template to use, from mlp, convnet and resnet
  • specify the architecture with the layers parameter:
    • for mlp, e.g. [300,100,10]
    • for convnet, e.g. ["1CR64","1CR128","2CR256","1024","512"], where the main pattern is xCRy, where y is the number of outputs (feature maps), CR stands for Convolution + Activation (with relu as default), and x specifies the number of chained CR blocks without pooling. Pooling is applied between all xCRy blocks.
    • for resnets:
      • with images, e.g. ["Res50"], where the main pattern is ResX with X the depth of the Resnet
      • with character-based models (text), use the xCRy pattern of convnets instead, with the main difference that x now specifies the number of chained CR blocks within a resnet block
      • for Resnets applied to CSV or SVM (sparse data), use the mlp pattern. In this latter case, at the moment, the resnet is built with blocks made of two layers for each specified layer after the first one. For example, [300,100,10] means that a first hidden layer of size 300 is applied, followed by a resnet block made of two fully connected layers of size 100, and another block of two fully connected layers of size 10. This is subject to future changes and finer control.
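
As an illustration, a service-creation payload using the mlp template and the layers parameter described above might look as follows (the description, nclasses value and repository paths are hypothetical placeholders):

```shell
# Sketch of a hypothetical /service creation payload using the "mlp"
# template with layers [300,100,10]; paths are placeholders.
# Piping through json.tool simply validates and pretty-prints the JSON.
cat <<'EOF' | python3 -m json.tool
{
  "mllib": "caffe",
  "description": "MLP classification service",
  "type": "supervised",
  "parameters": {
    "input": {"connector": "csv"},
    "mllib": {"template": "mlp", "nclasses": 10, "layers": [300, 100, 10]}
  },
  "model": {
    "templates": "/path/to/deepdetect/templates/caffe/",
    "repository": "/path/to/model/"
  }
}
EOF
```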


DeepDetect is designed and implemented by Emmanuel Benazera beniz@droidnik.fr.


Below are instructions for Ubuntu 14.04 LTS. For other Linux and Unix systems, steps may differ, and CUDA, Caffe and other libraries may prove difficult to set up. If you are building on 16.04 LTS, see https://github.com/beniz/deepdetect/issues/126 for how to proceed.

Beware of dependencies; typically on Debian/Ubuntu Linux, do:

sudo apt-get install build-essential libgoogle-glog-dev libgflags-dev libeigen3-dev libopencv-dev libcppnetlib-dev libboost-dev libboost-iostreams-dev libcurlpp-dev libcurl4-openssl-dev protobuf-compiler libopenblas-dev libhdf5-dev libprotobuf-dev libleveldb-dev libsnappy-dev liblmdb-dev libutfcpp-dev cmake libgoogle-perftools-dev unzip python-setuptools python-dev libspdlog-dev

Default build with Caffe

For compiling along with Caffe:

mkdir build
cd build
cmake ..

If you are building for one or more GPUs, you may need to add CUDA to your ld path:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64

If you would like to build with cuDNN, your cmake line should be:

cmake .. -DUSE_CUDNN=ON

To target the build of underlying Caffe to a specific CUDA architecture (e.g. Pascal), you can use:

cmake .. -DCUDA_ARCH="-gencode arch=compute_61,code=sm_61"

If you would like to build on NVidia Jetson TX1:

cmake .. -DCUDA_ARCH="-gencode arch=compute_53,code=sm_53" -DUSE_CUDNN=ON -DJETSON=ON -DCUDA_USE_STATIC_CUDA_RUNTIME=OFF

On Jetson TX2, use -DCUDA_ARCH="-gencode arch=compute_62,code=sm_62"

If you would like a CPU-only build, use:

cmake .. -DUSE_CPU_ONLY=ON

If you would like to constrain Caffe to CPU only, use:
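
A likely invocation, assuming the build system exposes a -DUSE_CAFFE_CPU_ONLY option (an assumption, not confirmed by this copy), is:

```shell
# Assumption: -DUSE_CAFFE_CPU_ONLY constrains the Caffe backend to CPU
cmake .. -DUSE_CAFFE_CPU_ONLY=ON
```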


Build with XGBoost support

If you would like to build with XGBoost, include the -DUSE_XGBOOST=ON parameter to cmake:

cmake .. -DUSE_XGBOOST=ON


If you would like to build the GPU support for XGBoost (experimental from DMLC), use the -DUSE_XGBOOST_GPU=ON parameter to cmake:

cmake .. -DUSE_XGBOOST=ON -DUSE_XGBOOST_GPU=ON


Build with Tensorflow support

First you must install Bazel and Cmake (version > 3).

And other dependencies:

sudo apt-get install python-numpy swig python-dev python-wheel unzip

If you would like to build with Tensorflow, include the -DUSE_TF=ON parameter to cmake:

cmake .. -DUSE_TF=ON


If you would like to constrain Tensorflow to CPU, use:
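
A plausible invocation, assuming a -DUSE_TF_CPU_ONLY option in the build system (an assumption on the flag name), is:

```shell
# Assumption: -DUSE_TF_CPU_ONLY constrains the Tensorflow backend to CPU
cmake .. -DUSE_TF=ON -DUSE_TF_CPU_ONLY=ON
```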


You can combine with XGBoost support with:

cmake .. -DUSE_TF=ON -DUSE_XGBOOST=ON


Build with T-SNE support

Simply specify the option via cmake command line:

cmake .. -DUSE_TSNE=ON

Build with Dlib support

Specify the following option via cmake:

cmake .. -DUSE_DLIB=ON

This will automatically build with GPU support if possible. Note: this will also enable cuDNN if available by default.

If you would like to constrain Dlib to CPU, use:
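
A plausible invocation, assuming a -DUSE_DLIB_CPU_ONLY option in the build system (the flag name is an assumption), is:

```shell
# Assumption: -DUSE_DLIB_CPU_ONLY constrains the Dlib backend to CPU
cmake .. -DUSE_DLIB=ON -DUSE_DLIB_CPU_ONLY=ON
```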


Build with Caffe2 support

Specify the option via cmake:

cmake .. -DUSE_CAFFE2=ON

Build with similarity search support

Specify the following option via cmake:
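
A likely invocation, assuming a -DUSE_SIMSEARCH option (the flag name is an assumption, consistent with the "simsearch" feature mentioned elsewhere in the repository):

```shell
# Assumption: -DUSE_SIMSEARCH enables built-in similarity search support
cmake .. -DUSE_SIMSEARCH=ON
```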


Build with logs output into syslog

Specify the following option via cmake:
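
A plausible invocation, assuming a -DUSE_DD_SYSLOG option (the flag name is an assumption):

```shell
# Assumption: -DUSE_DD_SYSLOG redirects server logs to syslog
cmake .. -DUSE_DD_SYSLOG=ON
```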


Run tests

Note: running tests requires the automated download of ~75MB of datasets, and computations may take around thirty minutes on a CPU-only machine.

To prepare for tests, compile with:
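
A sketch, assuming the test suite is gated behind a BUILD_TESTS CMake option (the flag name is an assumption):

```shell
# Assumption: -DBUILD_TESTS=ON enables compilation of the test suite
cmake .. -DBUILD_TESTS=ON
make
```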


Run tests with:
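
Assuming tests were compiled in via CMake, the standard runner would be CTest from the build directory (a sketch; the repository may instead name individual test binaries):

```shell
# Sketch: run the compiled test suite via CTest from the build directory
cd build && ctest
```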


Start the server

cd build/main
./dede

DeepDetect [ commit 73d4e638498d51254862572fe577a21ab8de2ef1 ]
Running DeepDetect HTTP server on localhost:8080

Main options are:

  • -host to select which host to run on, default is localhost; use 0.0.0.0 to listen on all interfaces
  • -port to select which port to listen to, default is 8080
  • -nthreads to select the number of HTTP threads, default is 10

To see all options, do:

./dede --help

Pure command line JSON API

To use DeepDetect without the client/server architecture while passing the exact same JSON messages as the API:

./dede --jsonapi 1 <other options>

where <other options> stands for the command line parameters from the command line JSON API:

-info (/info JSON call) type: bool default: false
-service_create (/service/service_name call JSON string) type: string default: ""
-service_delete (/service/service_name DELETE call JSON string) type: string default: ""
-service_name (service name string for JSON call /service/service_name) type: string default: ""
-service_predict (/predict POST call JSON string) type: string default: ""
-service_train (/train POST call JSON string) type: string default: ""
-service_train_delete (/train DELETE call JSON string) type: string default: ""
-service_train_status (/train GET call JSON string) type: string default: ""

The options above can be obtained from running

./dede --help

Example of creating a service then listing it:

./dede --jsonapi 1 --service_name test --service_create '{"mllib":"caffe","description":"classification service","type":"supervised","parameters":{"input":{"connector":"image"},"mllib":{"template":"googlenet","nclasses":10}},"model":{"templates":"/path/to/deepdetect/templates/caffe/","repository":"/path/to/model/"}}'

Note that in command line mode the --service_xxx calls are executed sequentially and synchronously. Also note that the logs are those from the server; the JSON API response is not available in pure command line mode.

Run examples

See tutorials from http://www.deepdetect.com/tutorials/tutorials/