This project uses the Kinect face-tracking SDK to replace a user's face with another face, warped to match the user's head pose and expression. It was developed for my computer vision research project at the University of Canterbury.
A stand-alone version is available here:
You'll need to download the Microsoft Kinect SDK Runtime, which installs the required device drivers for the Kinect. Alternatively, you can use a different depth sensor, as long as it works with OpenNI2.
I developed this project in Visual Studio 2013 (VC++12), but it should work in other versions provided you can compile the required libraries.
Before you can compile this, make sure you set up the following libraries:
The program performs the following steps:
- Capture RGB + Depth video frame
- Detect head pose and face features using Kinect SDK
- Deform the Candide-3 mesh to the given head pose and face features
- Process the RGB + Depth frames using OpenCV
- Draw the RGB video frame
- Draw the texture-mapped Candide-3 model in OpenGL, using a custom blend shader
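The mesh-deformation step above follows the standard Candide-3 formulation: the deformed shape is the mean shape plus weighted shape units (static, per-person) and weighted animation units (dynamic, per-frame), i.e. g = ḡ + Sσ + Aα. A minimal sketch of that computation (function and variable names are illustrative, not this project's actual API):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// Deform the Candide-3 mean shape: g = meanShape + sum(sigma[u] * S[u])
//                                               + sum(alpha[u] * A[u]).
// Each unit is stored as one offset vector per mesh vertex.
std::vector<Vec3> deformCandide(
    const std::vector<Vec3>& meanShape,
    const std::vector<std::vector<Vec3>>& shapeUnits,  // S: static person shape
    const std::vector<float>& sigma,                   // shape-unit weights
    const std::vector<std::vector<Vec3>>& animUnits,   // A: dynamic expression
    const std::vector<float>& alpha)                   // animation-unit weights
{
    std::vector<Vec3> g = meanShape;
    for (std::size_t u = 0; u < shapeUnits.size(); ++u)
        for (std::size_t v = 0; v < g.size(); ++v) {
            g[v].x += sigma[u] * shapeUnits[u][v].x;
            g[v].y += sigma[u] * shapeUnits[u][v].y;
            g[v].z += sigma[u] * shapeUnits[u][v].z;
        }
    for (std::size_t u = 0; u < animUnits.size(); ++u)
        for (std::size_t v = 0; v < g.size(); ++v) {
            g[v].x += alpha[u] * animUnits[u][v].x;
            g[v].y += alpha[u] * animUnits[u][v].y;
            g[v].z += alpha[u] * animUnits[u][v].z;
        }
    return g;
}
```

In the real pipeline the weights come from the Kinect SDK's tracked head pose and face features each frame, and the result is handed to OpenGL for rendering.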
Side-note: This project uses a custom Candide-3 face model instead of the Kinect SDK's internal model, since it's not easy to match vertices with texture coordinates using the internal model. This functionality is provided through the WinCandide-3 project (all source files named 'eru' come from that project).
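For illustration, the blend shader mentioned above amounts to a standard "over" composite of the warped face texture onto the video frame, with a per-pixel alpha so the mesh edge fades into the original image. A CPU sketch of the per-pixel operation (the real work happens in GLSL; names here are illustrative):

```cpp
#include <cassert>

struct RGB { float r, g, b; };  // channels in [0, 1]

// Linear blend: alpha = 1 shows the replacement face texture,
// alpha = 0 shows the untouched video frame. A feathered alpha near the
// Candide-3 mesh boundary hides the seam.
RGB blendPixel(RGB video, RGB face, float alpha)
{
    RGB out;
    out.r = alpha * face.r + (1.0f - alpha) * video.r;
    out.g = alpha * face.g + (1.0f - alpha) * video.g;
    out.b = alpha * face.b + (1.0f - alpha) * video.b;
    return out;
}
```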
I'm unlikely to do much more work on this project since I have other commitments, but here's a list of things that could be improved in the future:
- Write a plugin for Blender that can read and write the Candide-3 model, so textures can be more accurately mapped. (I'm currently using the WinCandide-3 utility to approximately map the texture)
- Add support for multiple people
- Decrease tracking latency and improve face localisation. (Perhaps something like meanshift/optical flow plus a Kalman filter?)
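To sketch the Kalman-filter idea from the last item: a constant-velocity filter per coordinate would smooth the jittery face position between detections. This is a minimal one-axis illustration with made-up noise parameters, not tuned for the Kinect:

```cpp
#include <cassert>

// 1-D Kalman filter with a constant-velocity motion model.
// State: [position x, velocity v]. Measurement: noisy position z.
// q and r are illustrative process/measurement noise values; a real
// tracker would tune them and run one filter per tracked coordinate.
struct Kalman1D {
    float x = 0, v = 0;                 // state estimate
    float P[2][2] = {{1, 0}, {0, 1}};   // state covariance
    float q = 0.01f;                    // process noise
    float r = 1.0f;                     // measurement noise

    float update(float z, float dt) {
        // Predict: advance the state with the constant-velocity model.
        x += v * dt;
        P[0][0] += dt * (2 * P[0][1] + dt * P[1][1]) + q;
        P[0][1] += dt * P[1][1];
        P[1][0] = P[0][1];
        P[1][1] += q;
        // Correct: fold in the measured position z.
        float s = P[0][0] + r;          // innovation covariance
        float k0 = P[0][0] / s;         // Kalman gain (position)
        float k1 = P[1][0] / s;         // Kalman gain (velocity)
        float y = z - x;                // innovation
        x += k0 * y;
        v += k1 * y;
        P[1][1] -= k1 * P[0][1];        // P = (I - K*H) * P, H = [1 0]
        P[1][0] -= k1 * P[0][0];
        P[0][1] -= k0 * P[0][1];
        P[0][0] -= k0 * P[0][0];
        return x;                       // smoothed position
    }
};
```

Feeding it raw face-detection positions each frame gives a smoothed track that also predicts through short detection dropouts.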
If anyone improves on this project, I'm happy to accept pull requests!