v0.3.3 - 2024-05-25
- Changed how probabilistic regression is done, dividing the calibration set into two subsets to allow pre-computation of the CPS, achieving both validity and speed. Credit to an anonymous reviewer for this suggestion.
- Added updated regression experiments and plotting for revised paper.
- Added a new *under the hood* demo notebook showing how to access the information used in the plots, such as conditions and uncertainties.
- Several minor updates to descriptions and notebooks in the repository.
v0.3.2 - 2024-04-14
- Added fairness experiments and plotting for the XAI 2024 paper. Added a *Fairness* tag for the weblinks.
- Added multi-class experiments and plotting for upcoming submissions. Added a *Multi-class* tag for the weblinks.
- Improved the multi-class functionality, including updating the `VennAbers` class for more robust handling of multi-class problems (with or without Mondrian bins).
- Updated the requirement for crepes to v0.6.2, to address known issues with some versions of python.
- The pythonpath for pytest was added to pyproject.toml to avoid a module-not-found error when running pytest locally.
v0.3.1 - 2024-02-23
- Added support for Mondrian explanations, using the `bins` attribute. The `bins` attribute takes a categorical feature of the same size as the calibration or test set (depending on context), indicating the category of each instance. For continuous attributes, `crepes.extras.binning` can be used to define categories through binning.
- Added `BinaryRegressorDiscretizer` and `RegressorDiscretizer`, which are similar to `BinaryEntropyDiscretizer` and `EntropyDiscretizer` in that they use a decision tree to identify suitable discretizations for numerical features. `explain_factual` and `explain_counterfactual` have been updated to use these discretizers for regression by default. In a future version, the possibility to assign your own discretizer may be removed.
- Updated the Further reading and citing section in the README:
  - Updated the reference and bibtex to the published version of the introductory paper:
    - Löfström, H., Löfström, T., Johansson, U., and Sönströd, C. (2024). Calibrated Explanations: with Uncertainty Information and Counterfactuals. Expert Systems with Applications, 1-27.
    - @article{lofstrom2024calibrated, title = {Calibrated explanations: With uncertainty information and counterfactuals}, journal = {Expert Systems with Applications}, pages = {123154}, year = {2024}, issn = {0957-4174}, doi = {https://doi.org/10.1016/j.eswa.2024.123154}, url = {https://www.sciencedirect.com/science/article/pii/S0957417424000198}, author = {Helena Löfström and Tuwe Löfström and Ulf Johansson and Cecilia Sönströd}, keywords = {Explainable AI, Feature importance, Calibrated explanations, Venn-Abers, Uncertainty quantification, Counterfactual explanations}}
  - Added code and results for the *Investigating the impact of calibration on the quality of explanations* paper, which inspired the idea behind Calibrated Explanations.
  - Added a bibtex for the software repository:
    - @software{Lofstrom_Calibrated_Explanations_2024, author = {Löfström, Helena and Löfström, Tuwe and Johansson, Ulf and Sönströd, Cecilia and Matela, Rudy}, license = {BSD-3-Clause}, title = {Calibrated Explanations}, url = {https://github.com/Moffran/calibrated_explanations}, version = {v0.3.1}, month = feb, year = {2024}}
  - Updated the docs/citing.md with the above changes.
- Added a CITATION.cff with citation data for the software repository.
- Extended `__repr__` to include additional fields when `verbose=True`.
- Fixed a minor bug in the example provided in README.md and getting_started.md, as described in issue #26.
- Added `utils.transform_to_numeric` and a clarification about known limitations in README.md in response to issue #28.
- Fixed a minor bug in `FactualExplanation.__plot_probabilistic` that was triggered when no features were to be shown.
- Fixed a bug with the discretizers in `core`.
- Fixed a bug with saving plots to file using the `filename` parameter.
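The Mondrian `bins` attribute added in this release assigns each calibration or test instance a category, and `crepes.extras.binning` turns a continuous feature into such categories. As a plain-Python illustration of the underlying idea (equal-frequency binning), not the actual `crepes` implementation, and with a hypothetical function name:

```python
# Sketch of the idea behind Mondrian categories: discretize a continuous
# attribute into bins with roughly equal counts, then use each instance's
# bin label as its category. This mimics the concept of
# crepes.extras.binning; it is not the library's implementation.

def equal_frequency_bins(values, num_bins):
    """Assign each value a bin label in 0..num_bins-1 with roughly equal counts."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    labels = [0] * len(values)
    for rank, idx in enumerate(order):
        labels[idx] = min(rank * num_bins // len(values), num_bins - 1)
    return labels

ages = [23, 45, 31, 67, 52, 29, 38, 60]
bins = equal_frequency_bins(ages, 2)
# The four youngest instances land in bin 0, the four oldest in bin 1
```

The resulting label vector is what would be passed as the `bins` attribute, so that calibration is conditioned on category membership.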
v0.3.0 - 2024-01-02
- Updated to version 1.4.1 of `venn_abers`. Added `precision=4` to the fitting of the venn_abers model to increase speed.
- Preparation for weighted categorical rules has been implemented but is not yet activated.
- Added a state-of-the-art comparison, with scripts and notebooks for evaluating the performance of the method against `LIME` and `SHAP`: see Classification_Experiment_sota.py and Classification_Analysis_sota.ipynb for running and evaluating the experiment. Unzip results_sota.zip and run Classification_Analysis_sota.ipynb to reproduce the results used in the paper Calibrated Explanations: with Uncertainty Information and Counterfactuals.
- Updated the parameters used by `plot_all` and `plot_explanation`.
- Filtered out extreme target values in the quickstart notebook to make the regression examples more realistic.
- Fixed bugs related to how plots can be saved to file.
- Fixed an issue where `add_conjunctions` with `max_rule_size=3` did not work.
v0.2.3 - 2023-11-04
- Added an evaluation folder with scripts and notebooks for evaluating the performance of the method.
- One evaluation focuses on stability and robustness of the method: see Classification_Experiment_stab_rob.py and Classification_Analysis_stab_rob.ipynb for running and evaluating the experiment.
- One evaluation focuses on how different parameters affect the method regarding time and robustness: see Classification_Experiment_Ablation.py and Classification_Analysis_Ablation.ipynb for running and evaluating the experiment.
- Fix in `CalibratedExplainer` to ensure that greater-than works identically to less-than.
- Bugfix in `FactualExplanation._get_rules()`, which caused an error when categorical labels were missing.
v0.2.2 - 2023-10-03
Minor adjustments and fixes.
v0.2.1 - 2023-09-20
The wrapper file with the helper classes `CalibratedAsShapExplainer` and `CalibratedAsLimeTabularExplainer` has been removed. The `as_shap` and `as_lime` functions still work.
v0.2.0 - 2023-09-19
- Added a `WrapCalibratedExplainer` class which can be used for both classification and regression.
- Added quickstart_wrap to the notebooks folder.
- Added LIME_comparison to the notebooks folder.
- Removed the dependency on `shap` and `scikit-learn`, closing issue #8.
- Updated the weights to match LIME's weights (to ensure that a positive weight has the same meaning in both).
- Changed the name of the parameter `y` (representing the threshold in probabilistic regression) to `threshold`.
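The renamed `threshold` parameter governs probabilistic regression: rather than a point estimate, the method reports the calibrated probability that the target falls at or below the threshold. Below is a conceptual sketch using a plain empirical CDF over made-up calibration residuals; the library itself uses a conformal predictive system, and the function name here is hypothetical.

```python
# Conceptual sketch (not the library's API): estimate P(y <= threshold)
# for a new point prediction by shifting calibration residuals onto it
# and counting how many simulated outcomes fall at or below the threshold.

def prob_at_or_below(point_prediction, calibration_residuals, threshold):
    """Empirical P(y <= threshold), residuals taken as y_cal - yhat_cal."""
    simulated = [point_prediction + r for r in calibration_residuals]
    return sum(1 for y in simulated if y <= threshold) / len(simulated)

residuals = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0, 3.0]
p = prob_at_or_below(10.0, residuals, 11.0)
# 6 of the 8 simulated outcomes are <= 11.0, so p == 0.75
```

A real conformal predictive system additionally guarantees calibration of these probabilities; the sketch only conveys what the threshold question means.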
v0.1.1 - 2023-09-14
- Exchanged the slow `VennABERS_by_def` function for the `VennAbers` class in the `venn-abers` package.
- Low and high weights are correctly assigned, so that low < high is always the case.
- Adjusted the number of decimals in counterfactual rules to 2.
v0.1.0 - 2023-09-04
- Performance: Fast, reliable, stable and robust feature importance explanations.
- Calibrated Explanations: Calibration of the underlying model to ensure that predictions reflect reality.
- Uncertainty Quantification: Uncertainty quantification of the prediction from the underlying model and the feature importance weights.
- Interpretation: Rules with straightforward interpretation in relation to the feature weights.
- Factual and Counterfactual Explanations: Possibility to generate factual and counterfactual rules with uncertainty quantification of the expected predictions.
- Conjunctive Rules: Conjunctive rules conveying joint contribution between features.
- Multiclass Support: Multiclass support has been added since the original version developed for the paper Calibrated Explanations: with Uncertainty Information and Counterfactuals.
- Regression Support: Support for explanations from standard regression was developed and is described in the paper Calibrated Explanations for Regression.
- Probabilistic Regression Support: Support for probabilistic explanations from standard regression was added together with regression and is described in the paper mentioned above.
- Conjunctive Rules: Since the original version, conjunctive rules have also been added.
- Code Structure: The code structure has been substantially improved. The `CalibratedExplainer`, when applied to a model and a collection of test instances, creates a collection class, `CalibratedExplanations`, holding `CalibratedExplanation` objects, which are either `FactualExplanation` or `CounterfactualExplanation` objects. Operations can be applied to all explanations in the collection directly through `CalibratedExplanations` or through each individual `CalibratedExplanation` (see the documentation).
Numerous fixes: the code has been refactored and considerably improved since the original version, and is now also tested and documented.
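The object structure described under Code Structure can be sketched as a minimal mock. The class names follow the changelog, but the methods shown are hypothetical illustrations, not the library's API:

```python
# Illustrative mock (not the library's actual code) of the described
# structure: a collection class holding explanation objects, where
# collection-level operations fan out to each member.

class CalibratedExplanation:
    def __init__(self, instance):
        self.instance = instance

class FactualExplanation(CalibratedExplanation):
    kind = "factual"

class CounterfactualExplanation(CalibratedExplanation):
    kind = "counterfactual"

class CalibratedExplanations:
    """Collection returned per batch of test instances; delegates to members."""
    def __init__(self, explanations):
        self.explanations = explanations

    def __len__(self):
        return len(self.explanations)

    def kinds(self):
        # An operation applied across the whole collection at once.
        return [e.kind for e in self.explanations]

collection = CalibratedExplanations(
    [FactualExplanation(x) for x in ([1.0, 2.0], [3.0, 4.0])]
)
```

In the real package, such a collection is produced by applying `CalibratedExplainer` to a model and test instances, and operations like plotting work both collection-wide and per explanation.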