
DFX Python End to End Demo

This is a demonstration of the complete workflow of DFX SDK and API functionalities.

This end-to-end demo includes the following steps:

  • Setting up a DFX API SimpleClient (which handles license registration, user creation and user login)
  • Creating a new measurement
  • Extracting face data from a video file or webcam input
  • Generating payload chunks from a study file
  • Sending payload chunks to API server
  • Subscribing to results and receiving measurement results from the API server
  • Decoding results and displaying them
  • Optional: Saving the payloads and results
  • Optional: Saving the visage facepoints of the input video
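
The steps above can be sketched as a single async pipeline. The sketch below is illustrative only — the coroutine and variable names are stand-ins, not actual DFX SDK/API calls — and uses a plain asyncio queue to show how chunk generation and chunk sending/result handling run concurrently:

```python
import asyncio

# Stand-in coroutines -- the real demo calls the DFX SDK and API here.
async def generate_chunks(queue: asyncio.Queue, n_chunks: int) -> None:
    """Extract face data and build payload chunks (stubbed)."""
    for i in range(n_chunks):
        await queue.put(f"chunk-{i}")
    await queue.put(None)  # sentinel: no more chunks

async def send_chunks(queue: asyncio.Queue, results: list) -> None:
    """Send each chunk to the API server and record the (stubbed) result."""
    while True:
        chunk = await queue.get()
        if chunk is None:
            break
        results.append(f"result-for-{chunk}")

async def run_measurement(n_chunks: int = 3) -> list:
    queue: asyncio.Queue = asyncio.Queue()
    results: list = []
    # Producer (chunk generation) and consumer (sending/receiving) run concurrently
    await asyncio.gather(generate_chunks(queue, n_chunks),
                         send_chunks(queue, results))
    return results

if __name__ == "__main__":
    # asyncio.run requires Python 3.7+; on 3.6 use loop.run_until_complete instead
    print(asyncio.run(run_measurement()))
```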

This document outlines how to set up and use the dfx end-to-end demo.

The basic workflow is illustrated:


For more details, please read

Setup Requirements

Python 3.6 or above is required.


Please ensure that Python and CMake are added to PATH.


You need to ensure you have at least Python 3.6, its development headers, and venv installed. On Ubuntu 18.04:

sudo apt-get install python3.6 python3.6-venv python3.6-dev

Note: On Ubuntu 16.04, you may need to use a PPA to install Python 3.6.

sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt-get update
sudo apt-get install python3.6 python3.6-venv python3.6-dev
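
With the interpreter installed, a quick sanity check (this assumes `python3` on your PATH now resolves to 3.6 or newer; substitute `python3.6` if it does not):

```shell
# Confirm the interpreter meets the 3.6 minimum required by the demo
python3 --version
python3 -c 'import sys; assert sys.version_info >= (3, 6), "Python 3.6+ required"'
```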

In addition, install the following packages.

sudo apt-get install git
sudo apt-get install build-essential cmake libopenblas-dev liblapack-dev  # Needed for dlib

Quick Start

  1. Make sure you have all dependencies above set up.

  2. Create a new Python virtual environment and activate it.

    # On Ubuntu
    python3.6 -m venv dfx-demo
    source dfx-demo/bin/activate

    REM On Windows
    python -m venv dfx-demo
    dfx-demo\Scripts\activate.bat
  3. Install the following packages in the virtual environment.

     pip install opencv-python  # asyncio ships with Python 3.6's standard library
     pip install dlib  # This may take a long time to finish

    Note: On Ubuntu 16.04, installing dlib in an existing Python virtual environment that was created before Python 3.6 was installed will create dependency issues due to conflicts between python3-dev and python3.6-dev.

  4. Install the DFX SDK (libdfx) after downloading the appropriate .whl package for your OS from the DeepAffex website

    pip install ./libdfx-{versionspecificinfo}.whl
  5. Install the DFX API SimpleClient library.

    pip install git+
  6. Clone this repository using git and navigate to the cloned folder

    git clone
    cd dfx-e2e-demo-python
  7. Download the Dlib face landmarks model and unzip it to the /res folder in the cloned repo.
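
dlib distributes the landmarks model as a .bz2 archive. If you prefer to decompress it programmatically rather than with a separate tool, a minimal stdlib sketch (the filenames in the example are assumptions based on dlib's usual naming — adjust them to your download):

```python
import bz2
from pathlib import Path

def decompress_bz2(src: str, dest: str) -> None:
    """Decompress a .bz2 file (such as dlib's landmarks model) to dest."""
    Path(dest).write_bytes(bz2.decompress(Path(src).read_bytes()))

# Example (paths assumed -- adjust to where you downloaded the model):
# decompress_bz2("shape_predictor_68_face_landmarks.dat.bz2",
#                "res/shape_predictor_68_face_landmarks.dat")
```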

  8. Download the DFX example data and unzip it. It contains the example video and face-tracking data.

  9. Obtain a valid DFX license key, study ID and study configuration file from NuraLogix. The license key and study ID can be obtained by logging into DFX Dashboard. A sample study configuration file (.dat) can be obtained by downloading the DFX C++ Windows/macOS/Linux SDK from here, and is located in dfxsdk/res/models.

  10. Run the demo. To see usage:

    $ python -h
    usage: [-h] [-v] [--send_method {REST,rest,websocket,ws}]
                     [--measurement_mode {discrete,streaming,batch,video}]
                     [--server {qa,dev,prod,prod-cn}]
                     [--chunklength CHUNKLENGTH] [--videolength VIDEOLENGTH]
                     [-r RESOLUTION] [--face_detect {brute,fast,smart}]
    DFX SDK Python example program
    positional arguments:
    study                 Path of study file
    imageSrc              Path of video file or numeric ID of web camera
    license_key           DFX API license key
    study_id              DFX API study ID
    email                 User email
    password              User password
    optional arguments:
    -h, --help            show this help message and exit
    -v, --version         show program's version number and exit
    --send_method {REST,rest,websocket,ws}
                            Method for adding/sending data to measurement
    --measurement_mode {discrete,streaming,batch,video}
                            Measurement mode
    --server {qa,dev,prod,prod-cn}
                            Name of server to use
    --chunklength CHUNKLENGTH
                            Length of each video chunk, must be between 5 and 30
    --videolength VIDEOLENGTH
                            Total length of video
    -r RESOLUTION, --resolution RESOLUTION
                            Resolution to open camera e.g. 1280x720
    --face_detect {brute,fast,smart}
                            Face detector caching strategy (smart by default)
    --faces FACES         Path of pre-tracked face points file
    --save_chunks_folder SAVE_CHUNKS_FOLDER
                            Folder to save chunks
    --save_results_folder SAVE_RESULTS_FOLDER
                            Folder to save results
    --save_facepoints     Save the facepoints into a json file; only valid with
                            the --face_detect brute option

    You will need to provide your valid DFX license key and study ID, and proper credentials (email and password) to run the demo.
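
The `-r`/`--resolution` flag takes a WIDTHxHEIGHT string such as `1280x720`. A minimal sketch of how such a value might be validated before opening the camera (`parse_resolution` is our illustrative helper, not part of the demo):

```python
def parse_resolution(spec: str) -> tuple:
    """Split a 'WIDTHxHEIGHT' string like '1280x720' into integer parts."""
    try:
        width, height = (int(part) for part in spec.lower().split("x"))
    except ValueError:
        raise ValueError(f"invalid resolution {spec!r}; expected e.g. 1280x720")
    if width <= 0 or height <= 0:
        raise ValueError("resolution dimensions must be positive")
    return width, height
```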
