
Multi-View Causal Representation Learning with Partial Observability

OpenReview | arXiv | BibTeX

Figure: Latent generative model with dependent causal variables.

Official code for the ICLR 2024 spotlight paper (top 5%) Multi-View Causal Representation Learning with Partial Observability by Dingling Yao, Danru Xu, Sébastien Lachapelle, Sara Magliacane, Perouz Taslakian, Georg Martius, Julius von Kügelgen, and Francesco Locatello. Please cite us when making use of our code or ideas.
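
In line with the contrastive setup this code builds on, encodings of different views are aligned so that any subset of views identifies the latent content those views share. Below is a minimal, illustrative sketch of such an InfoNCE-style alignment between two view encodings; the function and variable names are ours, and this is not the repository's exact loss (see main_numerical.py and main_multimodal.py for the actual implementation).

# Illustrative only: a minimal InfoNCE-style alignment between two view encodings.
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    # z1, z2: paired encodings of shape (batch, dim) from two views of the same sample
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau  # pairwise similarities within the batch
    targets = torch.arange(z1.size(0), device=z1.device)
    # symmetric cross-entropy: each sample's positive is its own pair in the other view
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))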

Installation


cd $PROJECT_DIR
mamba env create -f env.yaml
mamba activate crl_venv
pre-commit install
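
A quick way to confirm the environment is usable (illustrative; the exact package versions come from env.yaml):

# run inside the activated crl_venv environment
import torch
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())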

Numerical Experiment

# train
python main_numerical.py

# evaluate
python main_numerical.py --evaluate
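
Identifiability in this line of work is typically assessed by how well the ground-truth latents can be predicted from the learned representation (e.g. the R² of a regression). The following is a minimal sketch of such a check with made-up stand-ins for the latents; it is not the script's actual evaluation code and assumes scikit-learn is available.

# Illustrative identifiability check: predict ground-truth latents from learned ones.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
z_true = rng.normal(size=(1000, 5))        # stand-in for ground-truth latents
z_hat = z_true @ rng.normal(size=(5, 5))   # stand-in for the learned representation
reg = LinearRegression().fit(z_hat, z_true)
print("R^2 per latent:", r2_score(z_true, reg.predict(z_hat), multioutput="raw_values"))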

Multimodal Experiment

Download the Multimodal3DIdent dataset [Daunhawer et al., ICLR 2023]:

# download and extract the dataset
wget https://zenodo.org/record/7678231/files/m3di.tar.gz
tar -xzf m3di.tar.gz
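
To verify the extraction, you can peek at the contents; the path below assumes the archive was unpacked into ./m3di, so adjust it as needed:

# Illustrative: list the first few entries of the extracted dataset
from pathlib import Path
for p in sorted(Path("m3di").rglob("*"))[:20]:
    print(p)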

Training and evaluation:

# train a model with three input views (img0, img1, txt0)
python main_multimodal.py --dataroot "$PATH_TO_DATA"  --dataset "multimodal3di"

# evaluate
python main_multimodal.py --dataroot "$PATH_TO_DATA" --dataset "multimodal3di" --evaluate
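
A key property of the model is that any subset of views identifies (at most) the latent content those views share. The toy snippet below illustrates this bookkeeping for the three views above; the latent index sets are hypothetical and chosen only for illustration.

# Illustrative toy example: which latent indices does each subset of views share?
from itertools import combinations
view_latents = {"img0": {0, 1, 2}, "img1": {1, 2, 3}, "txt0": {2, 3, 4}}  # hypothetical
for r in range(2, len(view_latents) + 1):
    for subset in combinations(view_latents, r):
        shared = set.intersection(*(view_latents[v] for v in subset))
        print(f"views {subset}: shared content indices {sorted(shared)}")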

Acknowledgements

This implementation is built upon multimodal-contrastive-learning and ssl_identifiability.

BibTeX

@inproceedings{
    yao2024multiview,
    title={Multi-View Causal Representation Learning with Partial Observability},
    author={Dingling Yao and Danru Xu and S{\'e}bastien Lachapelle and Sara Magliacane and Perouz Taslakian and Georg Martius and Julius von K{\"u}gelgen and Francesco Locatello},
    booktitle={The Twelfth International Conference on Learning Representations},
    year={2024},
    url={https://openreview.net/forum?id=OGtnhKQJms}
}
