Emotional Expressivity v2.0

| | |
| --- | --- |
| Date completed | September 26th, 2023 |
| Release where first appeared | OpenWillis v1.4 |
| Researcher / Developer | Vijay Yadav |
```python
import openwillis as ow

framewise, summary = ow.emotional_expressivity(filepath='video.mov', baseline_filepath='video_baseline.mov')
```
We use deepface to quantify the framewise intensity of the following emotions:
- Happiness
- Sadness
- Anger
- Fear
- Disgust
- Surprise
- Neutral (the absence of any emotion)
We also calculate a composite expressivity score, which averages the expressivity of each of the emotions above (except for neutral).
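The composite score reduces to a simple average over the six emotions. A minimal sketch of that calculation (this is an illustration, not the library's internal code, and the emotion keys are illustrative):

```python
# Sketch of the composite expressivity score: the mean of the six emotion
# intensities, with neutral excluded. Not the library's internal code;
# the emotion keys here are illustrative.
EMOTIONS = ["happiness", "sadness", "anger", "fear", "disgust", "surprise"]

def composite_expressivity(frame_scores):
    """Average the six emotion intensities for one frame (neutral excluded)."""
    return sum(frame_scores[e] for e in EMOTIONS) / len(EMOTIONS)

frame = {"happiness": 0.6, "sadness": 0.1, "anger": 0.0, "fear": 0.05,
         "disgust": 0.0, "surprise": 0.15, "neutral": 0.1}
composite = composite_expressivity(frame)  # about 0.15; neutral is ignored
```

Note that a high neutral score only lowers the composite indirectly, through the six emotion intensities being low; neutral itself never enters the average.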
Framewise values for each variable, ranging from 0 to 1, are saved in the `framewise` output.
If a baseline video is provided, all values are baseline-corrected and normalized using the same method as the Facial Expressivity function. The resulting normalized values range from -1 to 1, with negative values signifying expressivity for that emotion below baseline and positive values signifying expressivity above baseline.
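The exact normalization is defined by the Facial Expressivity function. As a hedged sketch, assume the simplest variant consistent with the stated output range: subtracting each emotion's mean intensity over the baseline video, which maps raw values in [0, 1] onto [-1, 1]:

```python
# Hedged sketch of baseline correction (an assumption, not the library's
# documented method): subtract each emotion's mean intensity over the baseline
# video. Raw intensities lie in [0, 1], so corrected values lie in [-1, 1].
def baseline_correct(framewise, baseline):
    """framewise, baseline: lists of per-frame {emotion: intensity} dicts."""
    n = len(baseline)
    baseline_mean = {e: sum(f[e] for f in baseline) / n for e in baseline[0]}
    return [{e: v - baseline_mean[e] for e, v in f.items()} for f in framewise]

main = [{"happiness": 0.8}, {"happiness": 0.2}]
base = [{"happiness": 0.5}, {"happiness": 0.3}]
corrected = baseline_correct(main, base)  # baseline mean 0.4 -> about 0.4 and -0.2
```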
The framewise expressivity values are compiled for the video and saved in the `summary` output, which contains the primary outcome measures of the function, namely the mean expressivity of each emotion over the course of the video.
| Parameter | Type | Description |
| --- | --- | --- |
| `filepath` | str | path to main video |
| `baseline_filepath` | str, optional | path to baseline video |
`framewise`

| Type | pandas dataframe |
| --- | --- |
| Description | Dataframe with the framewise output of facial emotion expressivity. Columns are emotional expressivity measures, with the last column being composite emotional expressivity, i.e. the mean of all individual emotions except neutral. Values range from -1 to 1 when a baseline is provided and from 0 to 1 otherwise. Rows represent frames in the video. |
What the data frame looks like:

| frame | angry | disgust | fear | happiness | sadness | surprise | neutral | composite |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | | | | | | | | |
| 1 | | | | | | | | |
| ... | | | | | | | | |
`summary`

| Type | pandas dataframe |
| --- | --- |
| Description | Dataframe containing the mean and standard deviation values for the columns in the framewise output. |
What the data frame looks like:

| happiness_mean | sadness_mean | anger_mean | fear_mean | disgust_mean | surprise_mean | neutral_mean | composite_mean | happiness_std | sadness_std | anger_std | fear_std | disgust_std | surprise_std | neutral_std | composite_std |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
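The summary aggregation is a per-column mean and standard deviation over frames. A pure-Python stand-in for that dataframe operation (whether OpenWillis uses sample or population standard deviation is an assumption here):

```python
# Sketch of the summary aggregation: mean and (sample) standard deviation of
# each framewise column. Pure-Python stand-in for the dataframe operation;
# the choice of sample vs. population std is an assumption.
from statistics import mean, stdev

def summarize(framewise):
    """framewise: list of per-frame dicts -> one {<col>_mean, <col>_std} dict."""
    summary = {}
    for col in framewise[0]:
        values = [f[col] for f in framewise]
        summary[f"{col}_mean"] = mean(values)
        summary[f"{col}_std"] = stdev(values)
    return summary

frames = [{"happiness": 0.2}, {"happiness": 0.4}, {"happiness": 0.6}]
stats = summarize(frames)  # happiness_mean about 0.4, happiness_std about 0.2
```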
Here, we use the sample data included as part of the repository to calculate emotional expressivity.
```python
import openwillis as ow

framewise, summary = ow.emotional_expressivity(filepath='data/subj01.mp4', baseline_filepath='data/subj01_base.mp4')
framewise.head(2)
```
| frame | angry | disgust | fear | happiness | sadness | surprise | neutral | composite |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 0.051450 | 0.000031 | 0.250485 | -0.113090 | 0.638163 | -0.000234 | -0.446323 | 0.137801 |
| 1 | 0.006272 | -0.000004 | 0.281984 | -0.113109 | 0.662542 | -0.000263 | -0.452409 | 0.139570 |
Below are the dependencies specific to the calculation of this measure.

| Dependency | License | Justification |
| --- | --- | --- |
| deepface | MIT | Free and open-source. Performs emotion detection; no action unit or facial landmark detection. |
OpenWillis was developed by a small team of clinicians, scientists, and engineers based in Brooklyn, NY.