This repository contains the code for the ML701 capstone project at MBZUAI.
Multi-modal learning aims to build models that can process and relate information from multiple modalities. Hateful memes are a recent vehicle for spreading hate speech on social platforms. The hate in a meme is conveyed through both the image and the text; therefore, both modalities need to be considered, since analyzing the embedded text or the image alone leads to inaccurate identification.
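The point above, that both modalities must be combined before classification, can be sketched as a simple late-fusion head over pre-computed image and text embeddings. This is a minimal illustration, not the repository's actual architecture; the embedding dimensions and layer sizes are assumptions.

```python
# Minimal late-fusion sketch: concatenate image and text embeddings, then
# classify jointly. Dimensions (512) and hidden size (256) are illustrative.
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    def __init__(self, image_dim: int = 512, text_dim: int = 512, hidden: int = 256):
        super().__init__()
        # Joint head over both modalities, since either one alone
        # can miss the hateful intent.
        self.head = nn.Sequential(
            nn.Linear(image_dim + text_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),  # hateful vs. not hateful
        )

    def forward(self, image_emb: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([image_emb, text_emb], dim=-1)
        return self.head(fused)

model = LateFusionClassifier()
logits = model(torch.randn(4, 512), torch.randn(4, 512))
print(logits.shape)  # torch.Size([4, 2])
```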
Python 3.10.10
git clone https://github.com/ML-Project-Team-G11/Hatememedetection
cd Hatememedetection
pip install git+https://github.com/ML-Project-Team-G11/CLIP.git
pip install -r requirements.txt
python main.py
Some related literature we referenced can be found here.
The Facebook Hateful Memes Challenge dataset (found here) and part of the Memotion 7k dataset were used for this project.
- label_memotion.jsonl - contains the texts extracted from hate memes and the corresponding image file names
- architecture.py - contains the model architecture definitions
- config.py - contains the model configuration class
- dataset.py - contains the dataset loading class
- logger.py - contains the Weights & Biases (wandb) logger setup
- parser.py - contains the code for parsing command-line arguments
- run.sh - shell script for running the project
- add_memotion_dataset.ipynb - contains code for adding the Memotion dataset to the training set
- hatememe_clip.ipynb - contains our initial implementation of the project
- updated_tsne_plots.ipynb - t-SNE visualizations of the datasets
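The command-line parsing that parser.py provides might look like the sketch below. The flag names (`--epochs`, `--lr`, `--batch-size`) and defaults are hypothetical, not the repository's actual arguments.

```python
# Hypothetical sketch of command-line parsing; flag names and defaults
# are illustrative, not the repository's actual interface.
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Hateful meme detection")
    parser.add_argument("--epochs", type=int, default=10, help="number of training epochs")
    parser.add_argument("--lr", type=float, default=1e-4, help="learning rate")
    parser.add_argument("--batch-size", type=int, default=32, help="training batch size")
    return parser

# Usage example with explicit argument strings.
args = build_parser().parse_args(["--epochs", "5", "--lr", "0.001"])
print(args.epochs, args.lr, args.batch_size)  # 5 0.001 32
```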