NeuroBioSense Multimodal Emotion Recognition

Research-grade multimodal emotion recognition system using:

  • Video face dynamics (FaceNet + temporal BiLSTM + attention)
  • Physiological signals (BVP/EDA/TEMP/ACC_X/ACC_Y/ACC_Z)
  • Cross-modal attention + soft reliability gating
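
The fusion idea above — cross-modal attention in both directions followed by a soft reliability gate — can be sketched in NumPy. The shapes, single-head dot-product attention, and scalar sigmoid gate here are illustrative assumptions, not the repository's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(query, context):
    """Scaled dot-product attention: each `query` row attends over `context`."""
    scores = query @ context.T / np.sqrt(query.shape[-1])  # (Tq, Tc)
    return softmax(scores, axis=-1) @ context              # (Tq, d)

def gated_fusion(face_seq, signal_seq, gate_w, gate_b=0.0):
    """Cross-modal attention both ways, then a scalar reliability
    gate mixes the mean-pooled attended features."""
    face_ctx = cross_attend(face_seq, signal_seq).mean(axis=0)  # (d,)
    sig_ctx = cross_attend(signal_seq, face_seq).mean(axis=0)   # (d,)
    # soft gate in (0, 1): how much to trust the face stream
    g = 1.0 / (1.0 + np.exp(-(np.concatenate([face_ctx, sig_ctx]) @ gate_w + gate_b)))
    return g * face_ctx + (1.0 - g) * sig_ctx                   # (d,)

d = 8
face_seq = rng.standard_normal((16, d))    # 16 face-frame embeddings
signal_seq = rng.standard_normal((32, d))  # 32 signal-window embeddings
fused = gated_fusion(face_seq, signal_seq, rng.standard_normal(2 * d))
print(fused.shape)  # (8,)
```

The gate lets the model down-weight a modality when its attended features look unreliable (e.g. occluded face, noisy sensor), rather than hard-switching between streams.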

Project Layout

  • emotion_recognition/models: all model components and full assembly
  • emotion_recognition/utils: preprocessing, dataset, metrics
  • emotion_recognition/scripts: stage-wise training + inference utilities
  • streamlit_app.py: deployment-ready web app

Quick Start

1) Create environment and install dependencies

python3 -m venv .venv
source .venv/bin/activate
pip install -U pip
pip install -r requirements.txt

2) Run a syntax smoke check

python -m compileall emotion_recognition

3) Stage 1: Face pretraining (FER2013 + CK+)

python -m emotion_recognition.scripts.train_face \
  --dataset-root Dataset \
  --fer-root Dataset/FER \
  --ck-csv Dataset/CKPLUS/ckextended.csv \
  --epochs 50 \
  --batch-size 64 \
  --output artifacts/facenet_stage1.pth

4) Stage 2: Signal pretraining (WESAD, optional)

python -m emotion_recognition.scripts.train_signal \
  --dataset-root Dataset \
  --wesad-root Dataset/WESAD \
  --epochs 50 \
  --batch-size 32 \
  --output artifacts/signal_stage2.pth

If the NPZ files are missing, Stage 2 automatically prepares artifacts/wesad_train.npz and artifacts/wesad_val.npz from the raw S*/S*.pkl files.
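
The auto-preparation amounts to slicing resampled signal channels into fixed windows and saving them as NPZ. A minimal sketch with synthetic data standing in for the six wrist channels — the 32 Hz rate, window length, hop, and majority-vote label rule here are illustrative assumptions, not the script's actual parameters:

```python
import numpy as np

def make_windows(signals, labels, win_len, hop):
    """Slice a (T, C) signal array into overlapping windows, assigning
    each window the majority label over its samples."""
    X, y = [], []
    for start in range(0, len(signals) - win_len + 1, hop):
        X.append(signals[start:start + win_len])
        y.append(np.bincount(labels[start:start + win_len]).argmax())
    return np.stack(X), np.array(y)

# synthetic stand-in for resampled wrist channels at 32 Hz:
# columns BVP, EDA, TEMP, ACC_X, ACC_Y, ACC_Z
sig = np.random.default_rng(0).standard_normal((32 * 60, 6))  # one minute
lab = np.repeat([1, 2], 32 * 30)                              # two label segments
X, y = make_windows(sig, lab, win_len=32 * 10, hop=32 * 5)    # 10 s windows, 5 s hop
np.savez("wesad_train.npz", X=X, y=y)
print(X.shape, y.shape)  # (11, 320, 6) (11,)
```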

5) Stage 3: Multimodal fine-tuning (NeuroBioSense)

python -m emotion_recognition.scripts.train_multimodal \
  --dataset-root Dataset \
  --facenet-stage1 artifacts/facenet_stage1.pth \
  --signal-stage2 artifacts/signal_stage2.pth \
  --epochs 50 \
  --batch-size 8 \
  --output artifacts/multimodal_stage3.pth

Dataset readiness check (recommended before the first training run)

python -m emotion_recognition.scripts.check_data --dataset-root Dataset

Inference

CLI single-clip inference

python -m emotion_recognition.scripts.predict_clip \
  --checkpoint artifacts/multimodal_stage3.pth \
  --video /path/to/clip.MP4 \
  --signal-csv /path/to/32-Hertz.csv \
  --participant-id P01 \
  --ad-code AD01 \
  --demographics-csv /path/to/Participant_demographic_information.csv

Real-time webcam inference

python -m emotion_recognition.scripts.inference_realtime \
  --checkpoint artifacts/multimodal_stage3.pth

Streamlit Deployment (Local)

streamlit run streamlit_app.py

Upload:

  • checkpoint (.pth)
  • video clip (.mp4)
  • optional signal CSV and participant/ad metadata for aligned multimodal inference

Hugging Face Spaces Deployment

Use a Streamlit Space and upload this repository.

Suggested settings:

  • SDK: streamlit
  • Python: 3.11
  • Startup command: streamlit run streamlit_app.py --server.port 7860 --server.address 0.0.0.0

GitHub Push Checklist

  1. Remove large local artifacts from Git tracking (checkpoints/data are ignored by default).
  2. Initialize git and commit.
  3. Push to GitHub.

git init
git add .
git commit -m "Initial multimodal emotion recognition implementation"
git branch -M main
git remote add origin https://github.com/<your-username>/<your-repo>.git
git push -u origin main

Notes

  • Validation/test split is participant-level to avoid leakage.
  • Stage 3 evaluation aggregates all windows per clip (mean or majority).
  • Checkpoint stores normalization stats used by deployment app.
  • NeuroBioSense 32-Hertz.csv files that lack participant/ad keys fall back to a label-agnostic segmentation strategy.
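
The first two notes above can be sketched together: a split that keeps every window of a participant on one side of the train/validation boundary, and per-clip aggregation of window predictions. The helper names, fractions, and toy probabilities below are illustrative assumptions, not the repository's actual code:

```python
import numpy as np

def participant_split(participant_ids, val_frac=0.2, seed=0):
    """Split window indices so no participant appears in both train
    and validation (prevents identity leakage across the split)."""
    ids = np.asarray(participant_ids)
    uniq = np.unique(ids)
    rng = np.random.default_rng(seed)
    rng.shuffle(uniq)
    n_val = max(1, int(len(uniq) * val_frac))
    val_mask = np.isin(ids, uniq[:n_val])
    return np.where(~val_mask)[0], np.where(val_mask)[0]

def aggregate_clip(window_probs, mode="mean"):
    """Combine per-window class probabilities into one clip prediction."""
    window_probs = np.asarray(window_probs)
    if mode == "mean":
        return int(window_probs.mean(axis=0).argmax())
    # majority vote over per-window argmax predictions
    return int(np.bincount(window_probs.argmax(axis=1)).argmax())

pids = ["P01"] * 4 + ["P02"] * 4 + ["P03"] * 4
train_idx, val_idx = participant_split(pids, val_frac=0.34)
probs = [[0.6, 0.4], [0.4, 0.6], [0.7, 0.3]]  # three windows, two classes
print(aggregate_clip(probs, "mean"), aggregate_clip(probs, "majority"))  # 0 0
```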

Final Project Submission Pack (Face + Physio + Multimodal)

Run one command to train and evaluate all three modes (face-only, signal-only, multimodal) on binary valence and auto-generate the final report files:

./emotion_recognition/scripts/run_final_project_suite.sh

Outputs:

  • artifacts/final_valence_face_only.json/.pth
  • artifacts/final_valence_signal_only.json/.pth
  • artifacts/final_valence_multimodal.json/.pth
  • artifacts/final_valence_metadata.json/.pkl
  • reports/final_project_report.md
  • reports/final_project_report.tex
  • reports/diagrams/data_pipeline.mmd
  • reports/diagrams/architecture.mmd

Optional LaTeX build:

pdflatex -output-directory reports reports/final_project_report.tex
