Releases: patrikhuber/superviseddescent

v0.4.1

15 Jul 23:36

This update adds a pre-trained model (trained on COFW) that achieves a state-of-the-art average error of 0.072 (normalised by the inter-eye distance) on COFW-test (a884b3f). (The model in our paper has an error of 0.073.)

Additional minor improvements:

  • Added a custom CMake target so that the library headers show up in IDEs (42f9c7e)
  • Added the debug solver as VerbosePartialPivLUSolver to the library (5f56b41)
  • Fix: Added the missing verbose_solver header (2f85b13)

v0.4.0

28 Jun 21:04

This is a major update with additions and improvements to both the library and the landmark detection.

The library now contains an implementation of Robust Cascaded Regression (RCR) facial landmark detection and ships a pre-trained model. Using it requires only a couple of lines of code (see apps/rcr/rcr-detect.cpp and the sketch below).
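
A minimal usage sketch, modelled on apps/rcr/rcr-detect.cpp. The header path, the names rcr::load_detection_model and detect(), and the model file path are assumptions here, so please check that file for the exact API:

```cpp
#include "rcr/model.hpp"              // assumed header under include/rcr/
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"

int main()
{
    cv::Mat image = cv::imread("face.png");
    // Load the pre-trained model shipped with the repository (placeholder path):
    rcr::detection_model model = rcr::load_detection_model("data/rcr_model.bin");
    // A face bounding box, e.g. from OpenCV's Haar cascade face detector:
    cv::Rect face_box(100, 100, 200, 200);
    // Run the cascaded regression and obtain the landmark locations:
    auto landmarks = model.detect(image, face_box);
    return 0;
}
```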

Major changes to the library:

  • Added adaptive regressor update (NormalisationStrategy - e.g. inter-eye-distance dependent; see the sketch after this list)
  • Changed from boost::serialization to cereal, and directly included the cereal headers into our project
  • Split the demo apps into examples/, which contains hello-world examples for the library, and apps/, which contains more involved apps like rcr-detect
  • Added the RCR code to the library under include/rcr/
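
To illustrate the inter-eye-distance variant of the NormalisationStrategy, here is a hypothetical sketch; the actual class name and signature in include/rcr/ may differ:

```cpp
#include <cmath>
#include "opencv2/core/core.hpp"

// Returns the current inter-eye distance, which can be used to scale the
// regressor update so it becomes invariant to the size of the face.
class InterEyeDistanceNormalisation
{
public:
    // Indices of the two outer eye landmarks in a flat (x_0..x_n-1, y_0..y_n-1) vector:
    InterEyeDistanceNormalisation(int right_eye_idx, int left_eye_idx, int num_landmarks)
        : right_eye(right_eye_idx), left_eye(left_eye_idx), num_lm(num_landmarks) {};

    double operator()(const cv::Mat& current_landmarks) const
    {
        const float dx = current_landmarks.at<float>(right_eye) - current_landmarks.at<float>(left_eye);
        const float dy = current_landmarks.at<float>(right_eye + num_lm) - current_landmarks.at<float>(left_eye + num_lm);
        return std::sqrt(dx * dx + dy * dy);
    };

private:
    int right_eye, left_eye, num_lm;
};
```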

Major changes to the RCR landmark detection:

  • Added adaptive HOG update, i.e. a different window size and different HOG parameters in each regressor level (see the sketch after this list)
  • Added a pre-trained landmark detection model with 22 landmarks
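
As a rough illustration of the adaptive HOG idea (the struct and field names below are made up for the example; the actual parameter definitions live in include/rcr/):

```cpp
#include <vector>

// Illustrative per-level HOG settings: each cascade level gets its own
// window size, typically going from coarse to fine.
struct HogParameters
{
    int cell_size; // pixels per HOG cell
    int num_cells; // cells per window side (window size = num_cells * cell_size)
    int num_bins;  // orientation bins
};

// One entry per regressor level, e.g. for a three-level cascade:
std::vector<HogParameters> hog_per_level = {
    {12, 4, 9}, // level 0: large window for coarse localisation
    {10, 4, 9}, // level 1
    { 8, 4, 9}  // level 2: small window for fine refinement
};
```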

Minor changes:

  • rcr-train is now built with OpenMP flags enabled
  • Set the CMake default to not build the tests and documentation
  • Updated the hello-world landmark detection example to train with only 5 landmarks
  • Changed code style to snake_case for variables and functions
  • "Included" vlhog (hog.c) in a header-only way
  • Enabled Visual Studio folders in CMake

v0.3.0

30 Apr 11:02

This update again speeds up the regressor training considerably (10x-20x); in most cases, learning should now take no more than a few minutes. This is achieved by separating the linear-algebra solver from the regressor (1ffa80e), using a PartialPivLU decomposition (c7ef596), and finally setting PartialPivLUSolver as the default (02b4a47). Training time is now dominated by how fast the projection/feature extraction is.

Additionally, the PartialPivLUSolver can run in parallel if compiled with -fopenmp (gcc/clang) or /openmp (VS).
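
The core idea is to solve the linear system with Eigen's partial-pivoting LU decomposition instead of computing a full inverse. A minimal sketch (the function and matrix names are illustrative, not the library's API); Eigen parallelises PartialPivLU when OpenMP is enabled:

```cpp
#include <Eigen/Dense>

// Solve A * R = b for the regressor matrix R, where A is the (square,
// regularised, hence invertible) system matrix.
Eigen::MatrixXf solve_linear_system(const Eigen::MatrixXf& A, const Eigen::MatrixXf& b)
{
    Eigen::PartialPivLU<Eigen::MatrixXf> lu(A); // much cheaper than a full inverse or an SVD
    return lu.solve(b);
}
```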

I posted an analysis of Eigen's different methods for solving linear systems of equations on my blog.

v0.2.0

15 Feb 12:58

This is a minor update that mostly increases speed (a lot!) and fixes a few minor issues.

Most notable changes since v0.1.0:

  • eb2df7b Fixed install target issue: the header directory structure didn't get copied, resulting in ThreadPool.h not ending up in the utils subdirectory.
  • 8034a49 Renamed x, x0, y and H to parameters, initialisations, templates and ProjectionFunction to make their meaning clearer.
  • fb39fd2 Added parallelisation via ThreadPool to the testing part
  • 2e37160 Using hardware_concurrency() to detect the number of threads for the pool, with 4 as a fallback (see the sketch after this list)
  • 28caedb Parallelised the execution of the transformation function H in the training
  • e4b43fb Replaced all matrix operations in the regressor-learning with Eigen, resulting in a 2x-10x speedup of the learning (even more if a lot of training examples are used)
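
The thread-count fallback mentioned above boils down to something like the following (a sketch, not the exact code from the repository):

```cpp
#include <thread>

// hardware_concurrency() may return 0 if the value is not computable,
// in which case we fall back to 4 threads.
unsigned int num_threads()
{
    const unsigned int n = std::thread::hardware_concurrency();
    return n > 0 ? n : 4;
}
```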