clmtrackr

[Image: tracked face]

clmtrackr is a work-in-progress javascript library for precise tracking of facial features in videos or images. It is currently an implementation of constrained local models fitted by regularized landmark mean-shift, as described in Jason M. Saragih's paper. Due to the heavy calculations involved, the current version of the library requires WebGL with support for floating-point textures.
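Since floating-point textures are a WebGL extension rather than a core feature, it can be useful to check for them up front. The snippet below is a generic capability check using only standard WebGL calls; it is not part of clmtrackr itself:

```javascript
// Rough capability check (not part of the library): verify that the browser
// exposes WebGL and the OES_texture_float extension before starting the tracker.
function supportsFloatTextures() {
  var canvas = document.createElement('canvas');
  var gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
  if (!gl) {
    return false; // no WebGL support at all
  }
  // Floating-point textures are an optional extension; getExtension returns
  // null when the GPU/browser combination does not provide it.
  return gl.getExtension('OES_texture_float') !== null;
}

if (!supportsFloatTextures()) {
  console.log('clmtrackr cannot run here: WebGL floating-point textures are unavailable.');
}
```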

The library provides a generic face model that was trained on the MUCT database. Xiaoguang Yan also kindly allowed us to use the trained model from his implementation of Constrained Local Models. The aim is to provide a model that is trained on a larger set of facial expressions, as well as a model builder for building your own facial models.

The library requires ccv (for initial face detection) and numeric.js (for matrix math).
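As a rough sketch of how the pieces might be wired together in a page, something like the following could be used. The script paths and the clm.tracker / init / start / getCurrentPosition names are assumptions for illustration only; consult the bundled examples for the actual API:

```html
<!-- Hypothetical setup: script names and the tracker API below are assumptions. -->
<script src="js/ccv.js"></script>      <!-- initial face detection -->
<script src="js/numeric.js"></script>  <!-- matrix math -->
<script src="js/clm.js"></script>      <!-- the tracker itself -->
<script src="js/model.js"></script>    <!-- the trained face model -->
<video id="input" autoplay></video>
<script>
  var video = document.getElementById('input');
  var tracker = new clm.tracker();  // assumed constructor name
  tracker.init(pModel);             // pModel: the trained model object (assumed)
  tracker.start(video);             // begin fitting the model to the video frames

  function loop() {
    // getCurrentPosition is assumed to return an array of [x, y] landmarks,
    // or false while no face has been fitted yet.
    var positions = tracker.getCurrentPosition();
    if (positions) {
      // draw or otherwise consume the landmark positions here
    }
    requestAnimationFrame(loop);
  }
  loop();
</script>
```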

For some more information about Constrained Local Models, take a look at Xiaoguang Yan's excellent tutorial, which was of great help in implementing this library.

Examples

Note that the code currently does not contain any mechanism for detecting when tracking fails and reinitializing, so the tracking will fail (badly) at around 0:25 in the example video.
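Until such a mechanism is added, a caller could approximate one with a crude watchdog along the lines of the sketch below. The plausibility heuristic is a naive placeholder written for illustration, and the stop()/start() calls are assumed API names, not confirmed parts of this version of the library:

```javascript
// Purely illustrative workaround, not part of the library: restart the tracker
// when the fitted landmarks look implausible.
function isTrackingPlausible(positions) {
  // Naive placeholder heuristic: every landmark coordinate should be a finite number.
  for (var i = 0; i < positions.length; i++) {
    if (!isFinite(positions[i][0]) || !isFinite(positions[i][1])) {
      return false;
    }
  }
  return true;
}

function watchTracker(tracker, video) {
  setInterval(function () {
    var positions = tracker.getCurrentPosition(); // assumed API, as above
    if (positions && !isTrackingPlausible(positions)) {
      tracker.stop();       // assumed API
      tracker.start(video); // re-run face detection and restart fitting
    }
  }, 1000);
}
```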

License

clmtrackr is distributed under the MIT License.
