clmtrackr is a work-in-progress JavaScript library for precise tracking of facial features in videos or images. It is currently an implementation of constrained local models (CLMs) fitted by regularized landmark mean-shift, as described in Jason M. Saragih's paper "Deformable Model Fitting by Regularized Landmark Mean-Shift". Due to the heavy calculations involved, the current version of the library requires WebGL with support for floating-point textures.
The library provides a generic face model that was trained on the MUCT database. Xiaoguang Yan also kindly allowed us to use the trained model from his implementation of Constrained Local Models. The aim is to eventually provide a model trained on a larger set of facial expressions, as well as a model builder for building your own facial models.
The library requires ccv (for initial face detection) and numeric.js (for matrix math).
For more information about Constrained Local Models, take a look at Xiaoguang Yan's excellent tutorial, which was of great help in implementing this library.
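With the dependencies loaded, typical usage looks roughly like the sketch below. It assumes the `clm.tracker` API (`init`, `start`, `getCurrentPosition`) and a model file that defines a `pModel` object; the exact script filenames and model name are placeholders and may differ in your checkout.

```html
<video id="video" src="somevideo.mp4" autoplay></video>
<script src="js/ccv.js"></script>
<script src="js/numeric.js"></script>
<script src="js/clm.js"></script>
<script src="js/model.js"></script><!-- defines pModel (assumed name) -->
<script>
  var videoEl = document.getElementById('video');
  var ctrack = new clm.tracker();
  ctrack.init(pModel);    // load the face model
  ctrack.start(videoEl);  // detect a face, then fit on each frame

  // getCurrentPosition() returns an array of [x, y] landmark
  // coordinates, or false before the first fit has completed
  function logPositions() {
    var positions = ctrack.getCurrentPosition();
    if (positions) {
      console.log(positions[0]); // e.g. the first contour point
    }
    requestAnimationFrame(logPositions);
  }
  requestAnimationFrame(logPositions);
</script>
```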
Note that the code does not yet contain any mechanism for detecting tracking failure and reinitializing, so the tracking will fail (badly) at around 0:25 in the video.
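Until built-in reinitialization exists, one possible workaround is to poll a fit-quality measure and restart the tracker when it degrades. This sketch assumes a `ctrack` set up as above and a `getConvergence()` method (check that your build exposes it), which returns the squared mean distance the landmarks moved in the last iteration; large values suggest the fit is oscillating rather than converging. The threshold is arbitrary and would need tuning per video.

```javascript
var CONVERGENCE_LIMIT = 0.5; // arbitrary threshold, tune for your footage

function checkTracking() {
  // a diverged fit keeps moving between iterations instead of settling
  if (ctrack.getConvergence() > CONVERGENCE_LIMIT) {
    ctrack.stop();          // abandon the diverged fit...
    ctrack.start(videoEl);  // ...and rerun face detection from scratch
  }
  setTimeout(checkTracking, 1000); // check once a second
}
checkTracking();
```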
clmtrackr is distributed under the MIT License.