
Predicting-Peer-Satisfaction-EDM2022

Description

This is the repository for the following paper presented at the 2022 International Conference on Educational Data Mining (EDM):

Investigating Multimodal Predictors of Peer Satisfaction for Collaborative Coding in Middle School

Introduction

Collaborative learning is a complex process during which two or more learners exchange opinions, construct shared knowledge, and solve problems together. While engaging in this interactive process, learners' satisfaction toward their partners plays a crucial role in defining the success of the collaboration. If intelligent systems could predict peer satisfaction early during collaboration, they could intervene with adaptive support. However, while extensive studies have associated peer satisfaction with factors such as social presence, communication, and trustworthiness, there is no research on automatically predicting learners’ satisfaction toward their partners. To fill this gap, this paper investigates the automatic prediction of peer satisfaction by analyzing 44 middle school learners’ interactions during collaborative coding tasks. We extracted three types of features from dialogues: 1) linguistic features indicating semantics; 2) acoustic-prosodic features including energy and pitch; and 3) visual features including eye gaze, head pose, facial behaviors, and body pose. We then trained several regression models to predict the peer satisfaction scores that learners received from their partners. The results revealed that head position and body location were significant indicators of peer satisfaction: lower head and body distances between partners were associated with more positive peer satisfaction. This work is the first to investigate the multimodal prediction of peer satisfaction during collaborative problem solving, and represents a step toward the development of real-time intelligent systems that support collaborative learning.

Authors

Yingbo Ma, Mehmet Celepkolu, Kristy Elizabeth Boyer

Citation


Code Structure

Directories:

(1) Features: this folder contains the Python code for feature extraction and post-processing described in Section 4.

(2) Prediction_Models: this folder contains the Python code for the prediction models described in Section 5.

(3) For other data post-processing details, please refer to the author's separate personal repository: https://github.com/yingbo-ma/Daily-Research-Testing

(4) Images: this folder contains images displaying each feature's variation over time. Only the head_distance feature is presented in the paper, since it was the only significant feature found in this study; the images in this folder show the variation over time of the other, non-significant features.
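The head_distance feature mentioned above can be illustrated with a short sketch. This is not the authors' code: it assumes hypothetical per-frame 3-D head position arrays for the two partners (e.g., as exported by a face tracker) and computes the frame-wise Euclidean distance between them.

```python
import numpy as np

def head_distance(head_a, head_b):
    """Frame-wise Euclidean distance between two partners' head positions.

    head_a, head_b: arrays of shape (n_frames, 3) holding x/y/z head
    coordinates per video frame (hypothetical tracker output).
    Returns an array of shape (n_frames,).
    """
    head_a = np.asarray(head_a, dtype=float)
    head_b = np.asarray(head_b, dtype=float)
    return np.linalg.norm(head_a - head_b, axis=1)

# Toy example: two partners a constant 100 mm apart on the x-axis.
a = np.zeros((5, 3))
b = np.tile([100.0, 0.0, 0.0], (5, 1))
dist = head_distance(a, b)  # five frames, each with distance 100.0
```

Averaging `dist` over a session yields one scalar per dyad, which is the form a regression model would consume.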

Prerequisites

Basics

Python 3

Audio-based Feature Extraction: Loudness, Pitch, Shimmer, Jitter, MFCCs

audiofile v1.0.0
opensmile v2.2.0

Language-based Feature Extraction: Word Count, Speech Rate, Word2Vec, Pre-trained BERT

nltk v3.5
gensim v3.8.0
bert-for-tf2 v0.14.9
tensorflow-gpu v2.4.1
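Of these features, word count and speech rate need no heavy tooling. A minimal sketch using plain whitespace tokenization (the repository lists nltk, but a simple split illustrates the idea; the utterance text and duration below are made up):

```python
def word_count(utterance):
    """Number of word tokens in an utterance (whitespace tokenization)."""
    return len(utterance.split())

def speech_rate(utterance, duration_s):
    """Words per second for an utterance of known duration."""
    return word_count(utterance) / duration_s

# Hypothetical transcribed utterance with a known 4-second duration.
u = "so we need a repeat block around the move commands"
print(word_count(u))        # 10
print(speech_rate(u, 4.0))  # 2.5
```

In practice, utterance durations would come from the transcript timestamps, and nltk's tokenizer would replace the bare `split()`.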

Video-based Feature Extraction: Eye Gaze, Head Pose, Facial AUs, Body Pose

The video-based feature extraction was performed through command-line tools rather than Python scripts.

Prediction Models

tensorflow-gpu v2.4.1
NVIDIA GPU + CUDA CuDNN
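The paper trains several regression models on the extracted features to predict partner-reported satisfaction scores. As a framework-free stand-in for those models (the repository itself uses TensorFlow; the synthetic data below is invented), a closed-form ridge regression looks like this:

```python
import numpy as np

def fit_ridge(X, y, alpha=1.0):
    """Closed-form ridge regression: w = (X^T X + alpha*I)^-1 X^T y.

    X: (n_samples, n_features) feature matrix (e.g., head distance,
    pitch statistics); y: (n_samples,) peer satisfaction scores.
    """
    n_features = X.shape[1]
    A = X.T @ X + alpha * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)

# Synthetic data: satisfaction decreases as the first feature
# (e.g., head distance) grows, echoing the paper's finding.
rng = np.random.default_rng(0)
X = rng.normal(size=(44, 3))          # 44 learners, 3 features
y = -2.0 * X[:, 0] + 0.5 * X[:, 1]    # noiseless linear target
w = fit_ridge(X, y, alpha=1e-6)
pred = X @ w                          # recovers y almost exactly
```

With 44 participants, the paper's setting calls for careful cross-validation (e.g., leave-one-out) rather than a single fit; this sketch only shows the model form.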
