Can FACSvatar be used for lip sync? #3
Yesterday the answer would have been "near real-time" (I couldn't get the data out of OpenFace in real time), but with the help of a professor in my lab we almost got real-time to work (it probably works today ^_^): see the OpenFace issue about real-time. Technically, lip sync should work: the Facial Action Coding System (FACS) also includes the muscles of the mouth. When I make a promotion video for FACSvatar v0.3, I'll include a speaking person.

The data still has to go through the Python scripts, because smoothing and a FACS --> blend shape conversion take place there. Also, if you want to improve the lip sync, you might want to do some post-processing, or mix audio-->FACS data with the original data, so it's handy to do this in modules outside a game engine.

Currently I don't have time to also support Unreal Engine, but since that engine offers Python support in its latest release, the integration shouldn't be too hard. Of course, ZeroMQ is not limited to Python; C++ would also work. So if you want to use that engine, please write some ZeroMQ subscriber code for it, I would love to include it in this project :)
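For anyone trying this, the subscriber side is only a few lines. Here is a minimal Python sketch with pyzmq (the address, port, and message layout are assumptions for illustration; check the project's configuration for the real values):

```python
import json
import zmq

ctx = zmq.Context()
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:5572")       # hypothetical publisher address
sub.setsockopt_string(zmq.SUBSCRIBE, "")  # empty prefix = all topics

while True:
    frames = sub.recv_multipart()  # list of byte frames
    data = json.loads(frames[-1])  # assumes JSON AU data in the last frame
    print(frames[0].decode(), data)  # e.g. hand the AU values to the engine
```

The same pattern carries over to C++ with cppzmq, which is why the engine-side code can stay thin.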
Hi NumesSanguis,

Unfortunately I'm unable to commit to writing code for Unreal. It does make sense, though. I've read about your modular approach and was thinking about the various (open source, or free for devs) output engines that might be swapped in for this project (which would also get the project more attention from a wider user base). Blender may also be on that list; the upcoming 2.8 version will have real-time rendering with Eevee. Presently I can see two use cases for the engine side of things: the user needs real time, or the user records and possibly edits the animation data from OpenFace.

Head shapes are potentially another module: realistic heads like ManuelBastioniLab (and DAZ), and possibly cartoon heads like this one from Blender user Pepe (https://vimeo.com/190801903), though I am not sure of the shapekey/bone/driver setup there. I've been following BlenderSushi's iPhoneX project for facial mocap. It seems to me that the requirement for cartoons is to radically emphasize emotions. I was wondering if one could detect the rate of change of particular AUs in the OpenFace stream and then emphasize the corresponding blend shape via the FACS dictionary.

The JALI project (http://www.dgp.toronto.edu/~elf/jali.html) has an interesting taxonomy of emotions and associated jaw/lip curves, and it also uses the FACS standard for lip syncing to audio. It is not real-time, however, because it processes the audio offline. But if FACSvatar can run in real time, then the need to process audio to extract phonemes and visemes is overcome.

Sorry to blabber on :) Cheers
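Something like this is what I have in mind, as a rough Python sketch (the function, threshold, and gain are all made up for illustration):

```python
def exaggerate(prev_aus, curr_aus, dt, gain=2.0, threshold=0.5):
    """Amplify blend shape values for AUs that are changing quickly.

    prev_aus/curr_aus: dicts mapping AU name -> intensity in [0, 1]
    dt: time between the two frames, in seconds
    """
    out = {}
    for au, value in curr_aus.items():
        rate = abs(value - prev_aus.get(au, value)) / dt  # intensity units / s
        if rate > threshold:                # fast movement -> emphasize
            value = min(value * gain, 1.0)  # clamp to the valid range
        out[au] = value
    return out
```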
Hey daz6549,

I was wondering if you were thinking of a different game engine than Unity 3D, so no worries that you cannot do that. Just produce the code you need for your project, share it, and FACSvatar will become more and more useful :)

Streaming into Blender is already working! The code is a bit ugly though, because waiting for messages currently happens in Blender's main loop. If I move this to a thread, I think real-time streaming into Blender should work. It works well enough for now for non-real-time use: https://www.youtube.com/watch?v=ImB3it_26bc

For exaggerating the facial expressions, other people seem to share the same idea. Yes, you can; here are more details: https://blenderartists.org/forum/showthread.php?391401-Addon-Manuel-Bastioni-Lab-turns-Blender-in-a-laboratory-for-3d-humanoids-creation/page64

The JALI project is also great. Hopefully we can reach similar levels with an Open Source implementation. It's good to see so much excitement ^_^
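The threading idea would look roughly like this (a sketch only: it assumes pyzmq is importable from Blender's Python and uses `bpy.app.timers` from the upcoming 2.80 API; the address and frame layout are again assumptions):

```python
import json
import queue
import threading

import bpy
import zmq

msg_queue = queue.Queue()

def recv_loop():
    """Background thread: receive ZeroMQ messages and queue them."""
    ctx = zmq.Context()
    sub = ctx.socket(zmq.SUB)
    sub.connect("tcp://127.0.0.1:5572")       # hypothetical publisher address
    sub.setsockopt_string(zmq.SUBSCRIBE, "")
    while True:
        frames = sub.recv_multipart()
        msg_queue.put(json.loads(frames[-1]))  # assumes JSON in the last frame

def apply_messages():
    """Main-thread timer: drain the queue; touch Blender data here only."""
    while not msg_queue.empty():
        data = msg_queue.get()
        print(data)  # e.g. write the values to shape keys
    return 0.04      # re-run roughly 25 times per second

threading.Thread(target=recv_loop, daemon=True).start()
bpy.app.timers.register(apply_messages)
```

Keeping all bpy access on the main thread avoids the crashes you can get when modifying Blender data from a worker thread.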
Apologies for responding on a closed issue, but I am curious whether FACSvatar can extract visemes from audio, or whether it doesn't include audio in its processing step at all?
Hello @lightnarcissus, Great that we reached a similar idea of how to integrate OpenFace into a project using ZeroMQ!
Some notes:
Hi NumesSanguis
I hope it is OK to ask questions here? If not, I can go to Blender Artists or whatever is better.
Have you tried using FACSvatar for real-time lip-sync facial motion capture?
I am not sure whether FACSvatar can be used in real time, or whether the output from OpenFace must first be processed by the Python scripts before being routed to the game engine.
Cheers