
How to add more AUs? #8

Open
wdear opened this issue Nov 22, 2018 · 6 comments

Comments


wdear commented Nov 22, 2018

Hi NumesSanguis,
Thanks for sharing your work.
To realize more detailed expressions on the avatar in Unity 3D, I think I need to convert more AUs to blend shapes, which means more AU0X.json files in FACSvatar-master\modules\process_facstoblend\au_json (there are only 20 .json files there).
If I understood that right, where can I get more AU0X.json files, e.g. AU07.json? Or should I write these files manually? If so, are there any mapping rules between AUs and blend shapes that I can follow?
Thanks again.


dza6549 commented Nov 22, 2018

@wdear

I found these today, 48 AUs: https://sharecg.com/v/92621/gallery/21/DAZ-Studio/FaceShifter-For-Genesis-8-Female

They are for DAZ G8, not MBLab, but they might help you. Still not 64 AUs, and MBLab blend shapes will be different from DAZ blend shapes/morphs.

I guess you know this? https://www.cs.cmu.edu/~face/facs.htm

@NumesSanguis (Owner)

Hey @wdear

MBLAB models come with their own Blend Shapes. Since these Blend Shapes are not compatible with FACS, I created the .json files myself by going over all available Blend Shapes and comparing them to other resources such as the FACS manual. I only did the AUs matching OpenFace's output, except for AU07. Personally I noticed that using AU07 would interfere with other AUs, due to the limited set of Blend Shapes available for MBLAB models.

You can either add more .json files to "au_json" or, if you're not satisfied with the current conversion, create a new folder with .json files, e.g. "au_advanced".
Then you can run the process_facstoblend module with the following command: python main.py --au_folder au_advanced.
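
To make that concrete, here is a minimal sketch of what adding such a file could look like. The blend shape names and the exact key layout are assumptions for illustration only; check the existing files in au_json for the schema FACSvatar actually expects.

```python
# Hypothetical example: writing an AU07.json (lid tightener) mapping into a
# custom "au_advanced" folder. The blend shape names below are placeholders;
# use the names your MBLAB (or other) model really exposes, and mirror the
# structure of the existing files in au_json.
import json
from pathlib import Path

au07 = {
    "Expressions_eyeSquintL_max": 0.7,  # illustrative name, not a confirmed MBLAB Blend Shape
    "Expressions_eyeSquintR_max": 0.7,
}

Path("au_advanced").mkdir(exist_ok=True)
with open("au_advanced/AU07.json", "w") as f:
    json.dump(au07, f, indent=4)
```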

A good start is: https://imotions.com/blog/facial-action-coding-system/
But I heard these GIFs do not always match the official FACS manual.
Or what @dza6549 suggested.

Please make a pull request with the new .json files if you do decide to make them :)
Also, let me know if you want to work on asymmetric AUs (left side of face values different from right), because the code at present does only symmetric conversion. The reason for this is that so far no input module provides separate left and right AU intensity values.


wdear commented Nov 23, 2018

@NumesSanguis @dza6549
Thanks a lot.
Actually I've just realized two main limitations for detailed expressions: 1. OpenFace can only recognize a subset of AUs; 2. a specific model has its own range of blend shapes. I should have figured this out before opening this issue. :(
So in my scenario I've decided to:

  1. use models with more blend shapes (especially for the lips), like @dza6549 mentioned
  2. work further on recognizing more AUs if needed; I think this is a precondition for adding more .json files, or would those files go unused because OpenFace has no output for them? @NumesSanguis
  3. dig into asymmetric AUs; they seem worth exploring, but I'm just getting started here and may need more time :)

@NumesSanguis (Owner)

Unfortunately OpenFace has that limitation. When I asked the makers of OpenFace about more/asymmetric AUs, they said they couldn't do it because the available AU databases don't score intensity for all AUs, so no machine learning model can learn them. FACSvatar is not dependent on OpenFace, however. If some other input module sends a message to the bridge module formatted in a similar way, all other components will still work as normal. That's the whole idea behind the modular approach FACSvatar takes: it aims to be as general as possible. A rough sketch of such an input module follows below.
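
Illustration only: FACSvatar's messaging runs over ZeroMQ, so the sketch uses pyzmq, but the port, topic name, and JSON layout below are guesses, not the confirmed FACSvatar message format; the actual bridge/input modules in the repository define the real contract.

```python
# Rough sketch of a custom input module: publish one frame of AU intensities
# over ZeroMQ. Port, topic, and JSON layout are illustrative assumptions.
import json
import time
import zmq

ctx = zmq.Context()
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://127.0.0.1:5570")  # hypothetical port

au_frame = {
    "frame": 0,
    "timestamp": time.time(),
    "au_r": {"AU06": 0.8, "AU12": 1.4},  # OpenFace-style AU intensity keys
}

# topic frame + JSON payload; a bridge module would subscribe to this topic
pub.send_multipart([b"custom_input.au", json.dumps(au_frame).encode("utf-8")])
```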

  1. Same reason as above: if those other models either have FACS-based blend shapes, or someone makes a conversion module like the one for MBLAB models, you can still use the rest as-is. Let me know if you need help with this :) (Take-away: the conversion with the .json files is only needed because the human models themselves don't have corresponding AU Blend Shapes.)
  2. Even if you add more .json files, as long as no input module provides more AUs as input (in real time), FACSvatar cannot magically create more information. In the case of MBLAB models, AU data is passed through the bridge module on to the facstoblend module, where the message content (formatted as a JSON string) is looked up in the au_json folder through name string matching. If no .json file with a matching name is found, that data is simply ignored (see the sketch after this list).
  3. Good luck!
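
A rough sketch of the lookup described in point 2, under the assumption that each AU0X.json simply maps blend shape names to weights; consult the actual process_facstoblend code for the real logic and schema.

```python
# Sketch of AU -> blend shape conversion by file-name matching, assuming each
# AU .json file maps blend shape names to weights. AUs without a matching
# file are silently ignored, as described above.
import json
from pathlib import Path

def load_au_mappings(folder: str) -> dict:
    """Return {'AU06': {blend shape name: weight, ...}, ...} from folder/AU*.json."""
    return {p.stem: json.loads(p.read_text()) for p in Path(folder).glob("AU*.json")}

def aus_to_blendshapes(au_values: dict, mappings: dict) -> dict:
    """Convert {'AU06': 0.8, ...} intensities into blend shape values."""
    blend = {}
    for au, intensity in au_values.items():
        for shape, weight in mappings.get(au, {}).items():  # no mapping file -> skipped
            blend[shape] = blend.get(shape, 0.0) + weight * intensity
    return blend

mappings = load_au_mappings("au_advanced")
# "AU99" has no .json file here, so it is dropped without error
print(aus_to_blendshapes({"AU06": 0.8, "AU99": 1.0}, mappings))
```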


dza6549 commented Nov 23, 2018

It might be possible to train a new OpenFace model with synthetic data, i.e. not pictures of real people but pictures of 3D heads.

I certainly don't have the machine learning knowledge to optimize the design of a new OpenFace model. Unity has the ML-Agents toolkit, which connects to TensorFlow, and has been experimenting with synthetic data. See https://blogs.unity3d.com/2018/09/11/ml-agents-toolkit-v0-5-new-resources-for-ai-researchers-available-now/

Conceivably one might use Unity to produce the several tens of millions of images necessary for training a new model, using the DAZ FaceShifter morphs from intheflesh mentioned above or the Polywink sample available here: https://www.polywink.com/9-60-blendshapes-on-demand.html (which I think might be based on FACS), under a variety of lighting conditions, with synthetic backgrounds, diverse camera angles, etc. However, optimizing the Unity/TF/PPO learning algorithm is sadly beyond my capacity.

Also, we need to consider how the extra capacity to detect additional and asymmetric AUs would benefit the project. As we have discussed, unless the model has the required blend shapes, the additional detection capacity will not be used. On the other hand, extra detection capacity might motivate modellers to include more AU blend shapes in their 3D models. Another variable is that I can only find a limited subset of the complete set of FACS AUs online; most researchers appear to use a subset rather than the full set. In addition, the emotional FACS used by others appears to be limited to only ~7 emotions.

Additional emotions are mentioned here: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4734883/
"In 1982, he [Ekman] postulated six basic emotions: anger, disgust, fear, happiness, sadness, and surprise, and supplemented these in the 1990s with 11 additional emotions (amusement, contempt, contentment, embarrassment, excitement, guilt, pride in achievement, relief, satisfaction, sensory pleasure, and shame)."

This is all very interesting 👍

@NumesSanguis (Owner)

@dza6549 Synthetic images might be a good resource (they could also be made in Blender), but the general rule in AI is that you need to train on real data as well, at least for signal input from the real world such as video. Synthetic data can definitely help make a model stronger, though. So a small database of videos with all AUs annotated per frame plus a very large synthetic database seems like a good idea: quality + quantity.

A friend of mine has actually created an add-on called FACSHuman (paper) for MakeHuman. At some point he'll release it as open source.

Personally I don't believe that facial configuration == emotion, as described by Paul Ekman's theory. But if you're interested, a recent survey among emotion researchers (Ekman, P. What scientists who study emotion agree about. Perspectives on Psychological Science 11, 1 (2016), 31–34) asked which emotion labels (out of a list of 18) should be considered empirically established:

anger (91%), fear (90%), disgust (86%), sadness (80%), and happiness (76%). Shame, surprise, and embarrassment were endorsed by 40%-50%. Other emotions, currently under study by various investigators drew substantially less support: guilt (37%), contempt (34%), love (32%), awe (31%), pain (28%), envy (28%), compassion (20%), pride (9%), and gratitude (6%).

For me the Theory of Constructed Emotion by Lisa Feldman Barrett seems more plausible.

But this is going off-topic with regard to the issue raised. If you want to continue discussing emotion theory, please post a topic here: https://www.reddit.com/r/FACSvatar/ and I'll be glad to continue ^_^
I would like to keep GitHub issues as a place for technical issues or where FACSvatar is directly involved. More general discussions would be better on reddit (also for visibility reasons).
