For tracking in video, it is recommended to use a browser with WebGL support, though the library should work on any modern browser.
For more information about Constrained Local Models, take a look at Xiaoguang Yan's excellent tutorial, which was of great help in implementing this library.
- Tracking in image
- Tracking in video
- Face substitution
- Face masking
- Realtime face deformation
- Emotion detection
```html
<!-- clmtrackr libraries -->
<script src="js/clmtrackr.js"></script>
<script src="js/model_pca_20_svm.js"></script>
```
The following code initializes clmtrackr with the model we included, and starts the tracker running on a video element.
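A sketch of that initialization, assuming a `<video>` element with id `videoel` (the id is illustrative); `pModel` is the model object defined by the included model file:

```html
<script>
// The element id "videoel" is an assumption for illustration;
// use the id of your own <video> element.
var videoInput = document.getElementById('videoel');

var ctracker = new clm.tracker();
ctracker.init(pModel);
ctracker.start(videoInput);
</script>
```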
You can now get the positions of the tracked facial features as an array via `getCurrentPosition()`.
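The returned array is a list of `[x, y]` coordinate pairs, one per tracked point. As a sketch of working with it, the helper below (a hypothetical function, not part of clmtrackr) computes the bounding box of the tracked points:

```javascript
// Sketch: compute the bounding box of an array of [x, y] pairs,
// such as the one returned by getCurrentPosition().
function boundingBox(positions) {
  var xs = positions.map(function (p) { return p[0]; });
  var ys = positions.map(function (p) { return p[1]; });
  var minX = Math.min.apply(null, xs);
  var minY = Math.min.apply(null, ys);
  return {
    x: minX,
    y: minY,
    width: Math.max.apply(null, xs) - minX,
    height: Math.max.apply(null, ys) - minY
  };
}

// Example with made-up coordinates:
var sample = [[10, 20], [30, 5], [25, 40]];
var box = boundingBox(sample);
// box is { x: 10, y: 5, width: 20, height: 35 }
```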
You can also use the built-in function `draw()` to draw the tracked facial model on a canvas:
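Since tracking updates continuously, the model is typically redrawn on every animation frame. A sketch of such a loop, assuming an overlay `<canvas>` element with id `overlay` (an illustrative id) and an initialized tracker instance called `ctracker`:

```html
<script>
// Clear the overlay canvas and redraw the tracked model on each
// animation frame. The element id and the `ctracker` variable
// name are assumptions for illustration.
var overlay = document.getElementById('overlay');
var overlayCC = overlay.getContext('2d');

function drawLoop() {
  requestAnimationFrame(drawLoop);
  overlayCC.clearRect(0, 0, overlay.width, overlay.height);
  ctracker.draw(overlay);
}
drawLoop();
</script>
```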
See the complete example here.
clmtrackr is distributed under the MIT License.