Welcome to the official implementation of our TMLR paper: Meta-Learning Approach for Joint Multimodal Signals with Multimodal Iterative Adaptation.
This repository offers the tools and scripts needed to replicate the experiments detailed in our paper. It includes the implementation of the MIA algorithm, as well as several baseline methods like CAVIA, MetaSGD, ALFA, and GAP, to enable thorough comparison.
To get started, follow these steps to set up your environment:
```bash
git clone git@github.com:yhytoto12/mia.git
cd mia
conda create -n mia python=3.10
conda activate mia
pip install -r requirements.txt
```
All required packages are listed in `requirements.txt`.
We provide a dataset comprising joint synthetic functions, including sine, tanh, Gaussian, and ReLU functions.
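To illustrate what a joint synthetic task might look like, here is a minimal sketch in which the four modalities of one task share latent parameters (the function name `sample_joint_task` and the parameter ranges are assumptions for illustration, not the repository's actual data generator):

```python
import numpy as np

# Hypothetical sketch of sampling one joint synthetic task.
# The shared amplitude and shift couple the four modalities,
# so adapting to one signal carries information about the others.
def sample_joint_task(n_points=128, seed=None):
    rng = np.random.default_rng(seed)
    x = np.linspace(-5.0, 5.0, n_points)
    amplitude = rng.uniform(0.5, 2.0)   # assumed range
    shift = rng.uniform(-1.0, 1.0)      # assumed range
    return {
        "sine": amplitude * np.sin(x - shift),
        "tanh": amplitude * np.tanh(x - shift),
        "gaussian": amplitude * np.exp(-((x - shift) ** 2)),
        "relu": amplitude * np.maximum(0.0, x - shift),
    }

task = sample_joint_task(seed=0)
```

Each call yields one task: four aligned signals over the same inputs, parameterized by a common latent state.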
The CelebA dataset, annotated with RGB images, normal maps, and sketches, serves as a real-world testbed for multimodal learning.
You can download both datasets from our Google Drive. After downloading, place them in `datasets/synthetic` and `datasets/celeba`, respectively.
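Assuming the run scripts read from these relative paths, the expected layout after extraction would be:

```
datasets/
├── synthetic/
└── celeba/
```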
All scripts necessary to run experiments are located in the runner/ directory. We provide scripts to run experiments using different algorithms, including our proposed MIA method.
To run experiments on the synthetic multimodal dataset using MIA:
```bash
python runner/run_synthetic.py MIA
```
You can replace `MIA` with any other method, such as `CAVIA`, `MetaSGD`, `ALFA`, or `GAP`, to test those algorithms.
To run experiments on the CelebA dataset using MIA:
```bash
python runner/run_celeba.py MIA
```
As with the synthetic experiments, you can substitute `MIA` with other method names to run the baselines.
If you find our work useful in your research, please consider citing our paper:
```bibtex
@article{
    lee2024mia,
    title={Meta-Learning Approach for Joint Multimodal Signals with Multimodal Iterative Adaptation},
    author={Sehun Lee and Wonkwang Lee and Gunhee Kim},
    journal={Transactions on Machine Learning Research},
    issn={2835-8856},
    year={2024},
    url={https://openreview.net/forum?id=LV04KBaIQt},
}
```