To motivate a user to change their behavior or attitudes, for instance to practice physical activity to improve their well-being, virtual agents should have persuasive capabilities. The persuasiveness of a virtual agent depends not only on its speech but also on its non-verbal behavioral cues. In this paper, we propose a new tool called THRUST (from neuTral Human face to peRsUaSive virTual face) to automatically generate the head movements and facial expressions of a persuasive virtual character from a video of a human. Combining a machine learning approach trained on a corpus of persuasive human speech with a convolution-based method, we propose a model, grounded in real data of persuasive human messages, that transforms the non-verbal behavior a human expresses in a video into persuasive non-verbal behavior replicated on a virtual face.
To illustrate the THRUST model, we provide the overview below:
In a nutshell, the THRUST model takes as input a video of a speaker talking about a specific topic in a neutral way. The input is not limited to real-time video: it can be a webcam stream, a recorded video file, or a sequence of images. The important requirement is to be able to extract the facial landmarks, head poses, eye gaze, and facial Action Units (AUs) from the video, which serve as the model's input.
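As an illustration of that extraction step, the sketch below selects head-pose, gaze, and AU columns from an OpenFace output CSV with pandas. The column names follow OpenFace's CSV conventions (`pose_R*`, `gaze_angle_*`, `AUxx_r`); the tiny inline CSV here is synthetic stand-in data, not THRUST output.

```python
# Hedged sketch: selecting head-pose, gaze, and AU features from an
# OpenFace-style CSV. The two-frame CSV below is synthetic example data.
import io
import pandas as pd

openface_csv = io.StringIO(
    "frame,timestamp,pose_Rx,pose_Ry,pose_Rz,"
    "gaze_angle_x,gaze_angle_y,AU01_r,AU12_r\n"
    "1,0.00,0.01,-0.02,0.00,0.10,-0.05,0.3,1.2\n"
    "2,0.04,0.02,-0.01,0.01,0.09,-0.04,0.4,1.1\n"
)

df = pd.read_csv(openface_csv)
df.columns = df.columns.str.strip()  # OpenFace pads some headers with spaces

pose = df.filter(like="pose_R")       # head rotation
gaze = df.filter(like="gaze_angle")   # eye-gaze angles
aus = df.filter(regex=r"^AU\d+_r$")   # AU intensities

features = pd.concat([pose, gaze, aus], axis=1)
print(features.shape)  # → (2, 7)
```

The same filtering works unchanged on a real OpenFace `.csv`, replacing the inline string with the file path.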
This program HAS NOT been tested intensively; it is believed to do what it is supposed to do. However, you are welcome to test it with your own corpus and your own data.
Authors : Afef Cherni, Roxane Bertrand and Magalie Ochs
Contact: cherni.afef@univ-amu.fr
Version : 1.0 Date : July 2022
- 1- Clone the repository to retrieve all files from the THRUST Project
- 2- Check that you have the Original_Data folder, the POM_Data folder, THRUST_Tool.py, THRUST_Test and THRUST_Evaluation
- 3- To test our proposed tool, you should:
    a) Run Persuasion_Test: this script uses the input data
    b) Run THRUST_Evaluation: this script checks whether the outputs of our THRUST model are classified as persuasive or neutral. For that, you need to download the Random Forest classifier (optimized and stored as "best_rf.joblib")
    c) Run THRUST_Test: this script verifies whether your model's output is classified as persuasive or not.
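The evaluation step above boils down to loading the stored classifier with joblib and predicting a persuasive/neutral label per feature row. The sketch below illustrates that pattern; since the real "best_rf.joblib" is not bundled here, it first trains and saves a toy Random Forest on synthetic data so the snippet runs standalone. The feature layout and labels are assumptions for illustration only.

```python
# Hedged sketch of the classification step: persist a Random Forest with
# joblib, reload it, and predict labels (1 = persuasive, 0 = neutral).
# The model and data are toy stand-ins for the real "best_rf.joblib".
import os
import tempfile

import joblib
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 7))       # e.g. pose + gaze + AU features
y = (X[:, 0] > 0).astype(int)      # toy persuasive/neutral labels

clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)

path = os.path.join(tempfile.mkdtemp(), "best_rf.joblib")
joblib.dump(clf, path)             # stand-in for the downloaded file

best_rf = joblib.load(path)        # what the evaluation script would do
pred = best_rf.predict(X[:2])
print(pred.shape)  # → (2,)
```

In the real pipeline, only the `joblib.load` and `predict` calls are needed once the provided "best_rf.joblib" is downloaded.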
- 1- Python release (https://www.python.org/downloads/)
- 2- Greta platform to generate video with embodied conversational agent (https://github.com/isir/greta)
- 3- OpenFace toolkit (https://github.com/TadasBaltrusaitis/OpenFace)
- If you would like to test our THRUST tool with your own video, you should just extract the head poses and facial action units (you can use the OpenFace toolkit for this) and save the extracted file (in .csv format) in the Original_Data folder. Preferably, name your file like the examples given; otherwise you will have to modify the code, it's up to you to do what is necessary in this case ;-)
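Before dropping your own extracted file into Original_Data, it can help to sanity-check that the CSV header actually contains head-pose and AU columns. The check below is a hedged sketch: the required column prefixes follow OpenFace naming conventions and are an assumption, not a list taken from the THRUST code.

```python
# Hedged sketch: check that a CSV header contains the head-pose ("pose_R*")
# and action-unit ("AU*") columns THRUST expects. The required prefixes are
# an assumption based on OpenFace's column naming.
import csv
import io

REQUIRED_PREFIXES = ("pose_R", "AU")

def has_required_columns(csv_text: str) -> bool:
    """Return True if every required prefix matches at least one column."""
    header = next(csv.reader(io.StringIO(csv_text)))
    cols = [c.strip() for c in header]
    return all(any(c.startswith(p) for c in cols) for p in REQUIRED_PREFIXES)

good = "frame,pose_Rx,pose_Ry,AU01_r\n1,0.0,0.0,0.5\n"
bad = "frame,timestamp\n1,0.0\n"
print(has_required_columns(good), has_required_columns(bad))  # → True False
```

For a real file, read the first line of the `.csv` and pass it to the same function.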
- 1- To explore POM corpus (https://github.com/eusip/POM/)
- 2- ...
Demonstration.mp4
- Afef CHERNI (cherni.afef@univ-amu.fr, cherni.afef@lis-lab.fr)
- Magalie Ochs (magalie.ochs@univ-amu.fr, magalie.ochs@lis-lab.fr)