
DCAF

Acknowledgement

This repository is based on the original implementation of ConFEDE. We sincerely thank the authors of ConFEDE for open-sourcing their code.


Dataset

The dataset used in this project is available for download from the MMSA GitHub repository.


Requirements

Python: 3.10.16

Python packages:

  • matplotlib==3.6.3
  • pytorch-metric-learning==0.9.99
  • pytorch-pretrained-bert==0.6.2
  • torch==2.6.0
  • torchaudio==2.6.0
  • torchsummary==1.5.1
  • torchvision==0.21.0
  • transformers==4.47.1
  • ujson==4.0.2

Tip: you can also create a requirements.txt with the list above and run:

pip install -r requirements.txt
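Since several of these packages pin exact versions, a quick sanity check of the installed environment can save debugging time later. Below is a minimal, hypothetical helper (not part of DCAF itself) that compares a few of the pins above against what `importlib.metadata` reports; extend the dictionary with the remaining packages as needed.

```python
# Hedged sketch: check a few of the pinned dependencies listed above.
# This helper is illustrative and not part of the DCAF codebase.
from importlib import metadata

PINNED = {
    "matplotlib": "3.6.3",
    "torch": "2.6.0",
    "transformers": "4.47.1",
    "ujson": "4.0.2",
}

def check_pins(pins):
    """Return {package: (expected, found)} for missing or mismatched pins."""
    problems = {}
    for name, expected in pins.items():
        try:
            found = metadata.version(name)
        except metadata.PackageNotFoundError:
            found = None  # package not installed at all
        if found != expected:
            problems[name] = (expected, found)
    return problems

# An empty result means every listed pin matches the environment.
print(check_pins(PINNED))
```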

Checkpoints

Original DCAF Model Checkpoints:
Google Drive Link


Training

Training is a two-stage process: first train the encoders, then train the fusion network. Run the following commands in order:

# 1) Train the Encoders
python main.py

# 2) Train the Fusion Network
python main_fusion.py

About

Official implementation of DCAF (Dynamic Affective Consistency–Aware Fusion) for multimodal sentiment analysis. DCAF integrates Trimodal Cross-Attention (TMCA) and Contrastive Unimodal Label Distillation (CULD) to model cross-modal agreement and conflict across text, audio, and visual signals.
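The exact TMCA architecture is defined in the paper and code, not here; as a rough, illustrative sketch (using NumPy rather than the repository's PyTorch implementation, and with simple averaging as an assumed fusion step), trimodal cross-attention can be pictured as the text stream attending over the audio and visual streams:

```python
# Illustrative sketch only: scaled dot-product cross-attention where text
# queries attend to audio and visual keys/values. Shapes, fusion-by-averaging,
# and all names here are assumptions, not DCAF's actual implementation.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q, k, v):
    # q: (batch, q_len, dim), k/v: (batch, kv_len, dim)
    d = q.shape[-1]
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)  # (batch, q_len, kv_len)
    return softmax(scores) @ v                      # (batch, q_len, dim)

def trimodal_cross_attention(text, audio, visual):
    # Text queries gather information from the other two modalities;
    # the two attended views are fused by a plain average (an assumption).
    t2a = cross_attention(text, audio, audio)
    t2v = cross_attention(text, visual, visual)
    return (t2a + t2v) / 2

rng = np.random.default_rng(0)
text = rng.standard_normal((2, 5, 16))    # (batch, text_len, dim)
audio = rng.standard_normal((2, 7, 16))   # audio sequence is longer
visual = rng.standard_normal((2, 9, 16))  # visual sequence longer still
fused = trimodal_cross_attention(text, audio, visual)
print(fused.shape)  # (2, 5, 16) -- fused features align with the text length
```

Note how the fused output keeps the text sequence length: each text position aggregates whichever audio and visual frames agree with it most strongly, which is the kind of cross-modal agreement signal the description above refers to.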
