FaceSync: Open source framework for recording facial expressions with head-mounted cameras
The FaceSync toolbox provides 3D blueprints for building the head-mounted camera setup described in our paper. The toolbox also provides functions to automatically synchronize videos based on audio, manually align audio, plot facial landmark movements, and inspect synchronized videos alongside plotted data.
To install (on macOS or Linux), open Terminal and type

```
pip install facesync
```

or install from source:

```
git clone https://github.com/jcheong0428/facesync.git
cd facesync
python setup.py install
```

Extracting audio also requires ffmpeg or libav. On Linux:

```
sudo apt-get install libav-tools
```

On macOS:

```
brew install ffmpeg
```

or

```
brew install libav
```
facesync also requires several other Python packages, which you can install via

```
pip install -r requirements.txt
```
Recommended Processing Steps
1. Extract audio from the target video.
2. Find the offset with the extracted audio.
3. Trim the video using the offset. (If you need to resize your video, do so *before* trimming; otherwise the timing can be off.)
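Step 2 works by cross-correlating the extracted audio against the target audio and taking the lag with the highest correlation. As a rough, self-contained sketch of that idea (using synthetic signals and an assumed sample rate, not facesync's internals):

```python
import numpy as np

rng = np.random.default_rng(0)
rate = 1000  # samples per second (illustrative; real audio is e.g. 44100 Hz)

# Reference audio, and a "video" track that starts 0.25 s later
target = rng.standard_normal(2 * rate)
offset_true = 0.25
pad = np.zeros(int(offset_true * rate))
sample = np.concatenate([pad, target])[: 2 * rate]

# Cross-correlate and convert the best lag back to seconds
corr = np.correlate(sample, target, mode="full")
lag = corr.argmax() - (len(target) - 1)
offset = lag / rate
print(offset)  # recovers 0.25
```

The same quantity can also be computed in the frequency domain (multiply FFTs and invert), which is what an FFT-based offset search exploits for speed.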
```python
from facesync.facesync import facesync

# change file names to include the full path
video_files = ['path/to/sample1.MP4']
target_audio = 'path/to/cosan_synctune.wav'

# Initialize the facesync class
fs = facesync(video_files=video_files, target_audio=target_audio)

# Extract audio from sample1.MP4
fs.extract_audio()

# Find offset by correlation
fs.find_offset_corr(search_start=14, search_end=16)
print(fs.offsets)

# Find offset by fast Fourier transform
fs.find_offset_fft()
print(fs.offsets)
```
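Once an offset has been found, step 3 trims each video at that offset. facesync's own trimming helper is not shown here; as one hedged sketch (the file names and offset value are hypothetical), you can build an ffmpeg command that seeks to the offset and stream-copies the rest:

```python
# Illustrative only: values would come from fs.offsets, not be hard-coded
offset = 14.8  # seconds into the video where the sync point was found

cmd = [
    "ffmpeg",
    "-i", "path/to/sample1.MP4",
    "-ss", str(offset),   # start reading from the detected offset
    "-c", "copy",         # stream copy: fast, no re-encoding
    "path/to/sample1_trimmed.MP4",
]
print(" ".join(cmd))
```

Note that stream copy cuts on keyframes, so re-encoding may be needed when frame-accurate trimming matters.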
FaceSync provides handy utilities for working with facial expression data.
- Manually align audio with AudioAligner.
- Plot facial landmarks and how they change as a result of Action Unit changes.
- Use the VideoViewer widget to play video and data at the same time (only available in Python).
Please cite the following paper if you use our head-mounted camera setup or software.