A Python library for computing feature importance using disentangled methods, inspired by SHAP.
Current release: 0.0.2
FDFI (Flow-Disentangled Feature Importance) is a Python module that provides interpretable machine learning explanations through disentangled feature importance methods. This package implements both DFI (Disentangled Feature Importance) and FDFI (Flow-DFI) methods. Similar to SHAP, FDFI helps you understand which features are driving your model's predictions.
- **Multiple Explainer Types**: Tree, Linear, and Kernel explainers for different model types
- **OT-Based DFI**: Gaussian OT (`OTExplainer`) and Entropic OT (`EOTExplainer`)
- **Flow-DFI**: `FlowExplainer` with CPI and SCPI methods for non-Gaussian data
- **Rich Visualizations**: Summary, waterfall, force, and dependence plots
- **Easy to Use**: Simple API similar to SHAP
- **Extensible**: Built with modularity in mind for future enhancements
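As a toy illustration of the Gaussian OT idea behind `OTExplainer` (a sketch, not the fdfi implementation): for approximately Gaussian data, the optimal transport map to a standard normal is whitening, $Z = \Sigma^{-1/2}(X - \mu)$, which produces near-independent latent coordinates in which importance can be attributed feature by feature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Strongly correlated 2D Gaussian data
X = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.8], [0.8, 1.0]], size=5000)
mu = X.mean(axis=0)
Sigma = np.cov(X, rowvar=False)

# Inverse matrix square root via eigendecomposition
vals, vecs = np.linalg.eigh(Sigma)
Sigma_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T

# Gaussian OT map to a standard normal: whitening
Z = (X - mu) @ Sigma_inv_sqrt
print(np.cov(Z, rowvar=False))  # approximately the identity matrix
```

The disentangled coordinates `Z` are uncorrelated by construction, which is what makes per-coordinate importance well defined for Gaussian data; the flow-based explainers generalize this map to non-Gaussian distributions.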
```
git clone https://github.com/jaydu1/FDFI.git
cd FDFI
pip install -e .
```

Use `pyproject.toml` extras:

```
pip install -e ".[dev]"
pip install -e ".[plots]"
pip install -e ".[flow]"
```

```python
import numpy as np
from fdfi.explainers import OTExplainer

# Define your model
def model(X):
    return X.sum(axis=1)

# Create background data
X_background = np.random.randn(100, 10)

# Create an explainer
explainer = OTExplainer(model, data=X_background, nsamples=50)

# Explain test instances
X_test = np.random.randn(10, 10)
results = explainer(X_test)

# Confidence intervals (post-hoc)
ci = explainer.conf_int(alpha=0.05, target="X", alternative="two-sided")
```

By default, `conf_int()` now uses:

- `var_floor_method="mixture"`
- `margin_method="mixture"`
This improves stability for weak effects and avoids ad hoc thresholding in many use cases. You can still override both methods explicitly if needed.
`EOTExplainer` supports adaptive epsilon, stochastic transport sampling, and Gaussian/empirical targets:

```python
from fdfi.explainers import EOTExplainer

explainer = EOTExplainer(
    model.predict,
    X_background,
    auto_epsilon=True,
    stochastic_transport=True,
    n_transport_samples=10,
    target="gaussian",  # or "empirical"
)
results = explainer(X_test)
```

`FlowExplainer` uses normalizing flows for non-Gaussian data, supporting both CPI (Conditional Permutation Importance) and SCPI (Sobol-CPI):
- **CPI**: average the predictions first, then take the squared difference: $(Y - E[f(\tilde{X})])^2$
- **SCPI**: take the squared differences first, then average: $E[(Y - f(\tilde{X}_b))^2]$
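The two estimators can be sketched in plain NumPy. This is a toy illustration only: it uses independent resampling of the perturbed feature as a stand-in for the flow-based conditional sampler, and is not the fdfi implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model and data: f sums the features, Y = f(X)
f = lambda X: X.sum(axis=1)
X = rng.normal(size=(200, 3))
Y = f(X)

j = 0   # feature to perturb
B = 50  # number of counterfactual samples

# Counterfactual predictions: resample feature j independently B times
# (stand-in for conditional sampling; assumes independent features)
fX_tilde = np.empty((B, len(X)))
for b in range(B):
    X_b = X.copy()
    X_b[:, j] = rng.normal(size=len(X))
    fX_tilde[b] = f(X_b)

# CPI: average the predictions first, then take the squared difference
cpi = np.mean((Y - fX_tilde.mean(axis=0)) ** 2)

# SCPI: take the squared differences first, then average
scpi = np.mean((Y - fX_tilde) ** 2)
```

By the usual variance decomposition, SCPI equals CPI plus the average variance of the counterfactual predictions, so SCPI is always at least as large as CPI.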
```python
from fdfi.explainers import FlowExplainer

# Create explainer with CPI (default)
explainer = FlowExplainer(
    model.predict,
    X_background,
    fit_flow=True,
    method='cpi',                # 'cpi', 'scpi', or 'both'
    num_steps=200,               # flow training steps
    nsamples=50,                 # counterfactual samples
    sampling_method='resample',  # 'resample', 'permutation', 'normal', 'condperm'
)
results = explainer(X_test)
# results['phi_Z']: Z-space importance
# results['phi_X']: same as phi_Z (Z-space methods)

# Confidence intervals
ci = explainer.conf_int(alpha=0.05, target="Z", alternative="two-sided")
```

The disentangled explainers (`OTExplainer`, `EOTExplainer`, and `FlowExplainer`) report two diagnostics with qualitative labels (GOOD / MODERATE / POOR) using consistent `[FDFI][DIAG]` logging:
- Latent independence (median dCor): lower is better (thresholds: <0.10 good, <0.25 moderate).
- Distribution fidelity (MMD): lower is better (thresholds: <0.05 good, <0.15 moderate).
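The thresholds above imply a simple three-way labeling rule, sketched below. The helper name `diag_label` is hypothetical and not part of the fdfi API.

```python
def diag_label(value, good, moderate):
    """Map a diagnostic value to a qualitative label (lower is better)."""
    if value < good:
        return "GOOD"
    if value < moderate:
        return "MODERATE"
    return "POOR"

# Values from the example log below, with the README's thresholds
print(diag_label(0.0421, good=0.10, moderate=0.25))  # dCor -> GOOD
print(diag_label(0.0187, good=0.05, moderate=0.15))  # MMD  -> GOOD
```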
Example log:

```
[FDFI][DIAG] Flow Model Diagnostics
[FDFI][DIAG] Latent independence (median dCor): 0.0421 [GOOD] (lower is better)
[FDFI][DIAG] Distribution fidelity (MMD): 0.0187 [GOOD] (lower is better)
```

Access diagnostics directly:

```python
diag = explainer.diagnostics
print(diag["latent_independence_median"], diag["latent_independence_label"])
print(diag["distribution_fidelity_mmd"], diag["distribution_fidelity_label"])
```

For advanced users, flow models can be trained separately:
```python
from fdfi.models import FlowMatchingModel

# Train flow model externally
flow_model = FlowMatchingModel(X_background, dim=X_background.shape[1])
flow_model.fit(num_steps=500, verbose='final')

# Set pre-trained flow
explainer = FlowExplainer(model.predict, X_background, fit_flow=False)
explainer.set_flow(flow_model)
```

```
FDFI/
├── fdfi/                  # Main package directory
│   ├── __init__.py        # Package initialization
│   ├── explainers.py      # Explainer classes
│   ├── plots.py           # Visualization functions
│   └── utils.py           # Utility functions
├── tests/                 # Test suite
│   ├── test_explainers.py
│   ├── test_plots.py
│   └── test_utils.py
├── docs/                  # Documentation & tutorials
│   └── tutorials/         # Jupyter notebook tutorials
├── pyproject.toml         # Package configuration
└── README.md              # This file
```
🚧 This is starter code for DFI development. The core structure and API are in place, but full implementations are coming soon.
Current status:
- ✅ Package structure established
- ✅ Base classes and interfaces defined
- ✅ Testing framework set up
- ✅ Documentation structure created
- 🚧 Core algorithms (in development)
- 🚧 Visualization functions (in development)
Run the test suite:

```
# Install development dependencies
pip install -e ".[dev]"

# Run tests
pytest

# Run tests with coverage
pytest --cov=fdfi --cov-report=html
```

Full documentation and tutorials are available in the `docs/` directory:
- Quickstart Tutorial
- OT Explainer Tutorial
- EOT Explainer Tutorial
- Flow Explainer Tutorial
- Confidence Intervals
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.
FDFI is based on:
- Du, J.-H., Roeder, K., & Wasserman, L. (2025). Disentangled Feature Importance. arXiv preprint arXiv:2507.00260.
- Chen, X., Guo, Y., & Du, J.-H. (2026). Flow-Disentangled Feature Importance. In The Thirteenth International Conference on Learning Representations (ICLR).
Related work:
- SHAP: A game theoretic approach to explain machine learning models
If you use DFI in your research, please cite:
```bibtex
@software{dfi2026,
  title={DFI: Python Library for Disentangled Feature Importance},
  author={DFI Team},
  year={2026},
  url={https://github.com/jaydu1/FDFI}
}

@article{du2025disentangled,
  title={Disentangled Feature Importance},
  author={Du, Jin-Hong and Roeder, Kathryn and Wasserman, Larry},
  journal={arXiv preprint arXiv:2507.00260},
  year={2025}
}

@inproceedings{chen2026flow,
  title={Flow-Disentangled Feature Importance},
  author={Chen, Xin and Guo, Yifan and Du, Jin-Hong},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2026}
}
```

For questions and issues, please use the GitHub issue tracker.