hanqi-qi/Matte

Counterfactual Generation with Identifiability Guarantees

NeurIPS 2023: Counterfactual Generation with Identifiability Guarantees

Motivation: Counterfactual generation lies at the core of various machine learning tasks. Existing disentanglement methods crucially rely on oversimplified assumptions, such as independent content and style variables, to identify the latent variables, even though such assumptions may not hold for complex data distributions. The problem is exacerbated when data are sampled from multiple domains, since the dependence between content and style may vary significantly across domains.

Solutions: In this work, we tackle the domain-varying dependence between the content and the style variables inherent in the counterfactual generation task. We provide identification guarantees for such latent-variable models by leveraging the relative sparsity of the influences from different latent variables. Our theoretical insights enable the development of a doMain AdapTive counTerfactual gEneration model, called MATTE.

The data generation process: Grey shading indicates that a variable is observed. x is the text, c and s are the content and style variables, the exogenous noise is independent of c, and u is the domain index.
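
As a rough illustration of this generating process, here is a minimal Python sketch (the dimensions, linear mixing matrices, and tanh nonlinearities are illustrative assumptions, not the model used in the paper):

# Illustrative simulation of the assumed data generation process:
# the domain u modulates how style s depends on content c, and x mixes c and s.
# All names, shapes, and functional forms here are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
dim_c, dim_s, dim_x, n_domains = 4, 4, 16, 3

# One style-mixing matrix per domain, so the c-s dependence varies with u.
A = [rng.normal(size=(dim_s, dim_c)) for _ in range(n_domains)]
# A shared nonlinear mixing from (c, s) to the observation x.
B = rng.normal(size=(dim_x, dim_c + dim_s))

def sample(u):
    """Draw one (x, c, s) triple from domain u."""
    c = rng.normal(size=dim_c)               # content variable
    eps = rng.normal(size=dim_s)             # exogenous noise, independent of c
    s = np.tanh(A[u] @ c) + eps              # style depends on both c and the domain u
    x = np.tanh(B @ np.concatenate([c, s]))  # observed text representation
    return x, c, s

x, c, s = sample(u=1)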

Code Structure

bash scripts/train.sh     # train the model
bash scripts/transfer.sh  # transfer to the target attribute based on the trained latent space

Citation

If you find our work useful, please cite as:

@inproceedings{yan2023counterfactual,
  title={Counterfactual Generation with Identifiability Guarantees},
  author={Hanqi Yan and Lingjing Kong and Lin Gui and Yuejie Chi and Eric Xing and Yulan He and Kun Zhang},
  booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
  year={2023},
  url={https://openreview.net/forum?id=cslnCXE9XA}
}
