OpenReview | arXiv | BibTeX
Official code for the ICLR 2024 spotlight paper (top 5%) Multi-View Causal Representation Learning with Partial Observability, by Dingling Yao, Danru Xu, Sébastien Lachapelle, Sara Magliacane, Perouz Taslakian, Georg Martius, Julius von Kügelgen and Francesco Locatello. Please cite us when making use of our code or ideas.
cd $PROJECT_DIR
mamba env create -f env.yaml
mamba activate crl_venv
pre-commit install
# train
python main_numerical.py
# evaluate
python main_numerical.py --evaluate
Download the Multimodal3DIdent dataset [Daunhawer et al., ICLR 2023]:
# download and extract the dataset
wget https://zenodo.org/record/7678231/files/m3di.tar.gz
tar -xzf m3di.tar.gz
Training and evaluation:
# train a model with three input views (img0, img1, txt0)
python main_multimodal.py --dataroot "$PATH_TO_DATA" --dataset "multimodal3di"
# evaluate
python main_multimodal.py --dataroot "$PATH_TO_DATA" --dataset "multimodal3di" --evaluate
This implementation is built upon multimodal-contrastive-learning and ssl_identifiability.
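At the core of such contrastive multi-view codebases is an InfoNCE-style objective that pulls embeddings of paired views together while pushing apart embeddings of different samples. A minimal NumPy sketch of that objective for intuition (the `info_nce` function, its shapes, and the temperature value are illustrative assumptions, not this repo's API):

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """Symmetric InfoNCE loss between two batches of view embeddings.

    z1, z2: (batch, dim) arrays where row i of z1 and row i of z2 encode
    the same underlying sample (a positive pair); all other rows in the
    batch serve as negatives.
    """
    # L2-normalize so dot products are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # (batch, batch) similarity matrix
    idx = np.arange(len(z1))          # positives sit on the diagonal

    def xent(l):
        # row-wise cross-entropy with the diagonal as the target class
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # symmetrize: each view must retrieve its paired counterpart
    return 0.5 * (xent(logits) + xent(logits.T))
```

Aligned pairs yield a low loss, while mismatched pairs yield a high one, which is what drives the learned representations toward the shared content across views.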
@inproceedings{
yao2024multiview,
title={Multi-View Causal Representation Learning with Partial Observability},
author={Dingling Yao and Danru Xu and S{\'e}bastien Lachapelle and Sara Magliacane and Perouz Taslakian and Georg Martius and Julius von K{\"u}gelgen and Francesco Locatello},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=OGtnhKQJms}
}