# DeepGUM: Learning Deep Robust Regression with a Gaussian-Uniform Mixture Model

## Introduction

This is a Keras implementation of *DeepGUM: Learning Deep Robust Regression with a Gaussian-Uniform Mixture Model*, Stéphane Lathuilière, Pablo Mesejo, Xavier Alameda-Pineda, Radu Horaud, ECCV 2018.

For more details, see the pdf.

Tested with Keras 1.1.0 (Theano backend) and Python 2.7.12. Requires scikit-learn.


## How to run

trainingAnnotations.txt must contain the list of training images, each followed by its targets:

```
img_name_1.jpg y1 y2 y3
img_name_2.jpg y1 y2 y3
...
```

testAnnotations.txt must contain the list of test images in the same format.
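
For illustration, here is a minimal sketch of a parser for this annotation format. It is an assumption based on the format described above, not the actual loader in data_generator.py:

```python
def load_annotations(path):
    """Read 'image_name y1 y2 ... yD' lines into image names and target vectors."""
    names, targets = [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:  # skip empty lines
                continue
            names.append(parts[0])
            targets.append([float(v) for v in parts[1:]])
    return names, targets
```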

Download the VGG16 weights

Run the following command:

```
THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32,exception_verbosity='high' python deepGUM.py $rootpathData trainingAnnotations.txt testAnnotations.txt $DIM $JOB_ID $OPTIONS
```

where $rootpathData is the path to your dataset folder (the file vgg16_weights.h5 must be placed in this folder), $DIM is the dimension of the output space, and $JOB_ID is a job id used when saving the network weights (any number works).
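
For example, with a hypothetical dataset folder /data/myDataset, a 3-dimensional target, and job id 0:

```
THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32,exception_verbosity='high' python deepGUM.py /data/myDataset trainingAnnotations.txt testAnnotations.txt 3 0 -p -d -rnEqui
```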

$OPTIONS:

- Probability model (the inlier responsibilities rn used by these options are sketched after this list):
  - -i: each dimension of a sample is treated separately, so a single dimension can be an outlier on its own.
  - -p: when the predictions are landmarks, each (x, y) pair is treated jointly, so a landmark is an outlier as a whole rather than in x or y only.
  - default (no flag): the full sample is either an inlier or an outlier.
  - -d: diagonal covariances.
  - -iso: isotropic covariance.
- -u: update the parameter of the uniform distribution in the EM.
- Validation mode:
  - -rnTra: the mixture parameters computed on the training set are used to compute the validation loss. Many samples may then be considered outliers, since the variance on the validation set is usually larger than on the training set.
  - -rnHard: same, but with a hard decision when computing the loss (rn < 0.5 -> rn = 0, otherwise rn = 1).
  - -rnEqui: the same proportion of outliers as in the training set is discarded from the validation set, which avoids the problem of -rnTra.
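
For reference, here is a minimal sketch of how the inlier responsibilities rn of the Gaussian-Uniform mixture could be computed in the E-step, for the isotropic case (-iso). The function name and arguments are illustrative, not the repository's API; see log_gauss_densities.py and deepGUM.py for the actual implementation:

```python
import numpy as np

def inlier_responsibilities(residuals, sigma2, pi_in, uniform_density, hard=False):
    """E-step of the Gaussian-Uniform mixture (isotropic case, cf. -iso).

    residuals       : (N, D) array of prediction errors y_n - f(x_n)
    sigma2          : isotropic variance of the Gaussian inlier noise
    pi_in           : prior probability of being an inlier
    uniform_density : constant density of the uniform outlier component
    hard            : threshold the responsibilities at 0.5 (cf. -rnHard)
    """
    N, D = residuals.shape
    sq_norm = np.sum(residuals ** 2, axis=1)
    # log of the isotropic Gaussian density at each residual
    log_gauss = -0.5 * (D * np.log(2.0 * np.pi * sigma2) + sq_norm / sigma2)
    gauss = np.exp(log_gauss)
    # posterior probability that each sample is an inlier
    rn = pi_in * gauss / (pi_in * gauss + (1.0 - pi_in) * uniform_density)
    if hard:
        rn = (rn >= 0.5).astype(float)  # hard decision as in -rnHard
    return rn
```

These rn weight each sample's contribution to the regression loss, so suspected outliers are progressively down-weighted during training.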

## Support

For any questions, please contact Stéphane Lathuilière.
