
# Changelog

## [Unreleased]

Full changelog

### Features

### Fixes

## v0.3.3 - 2024-05-25

Full changelog

### Features

- Changed how probabilistic regression is done, dividing the calibration set into two parts to allow pre-computation of the CPS, achieving both validity and speed. Credits to an anonymous reviewer for this suggestion.
- Added updated regression experiments and plotting for the revised paper.
- Added a new under-the-hood demo notebook showing how to access the information used in the plots, such as conditions and uncertainties.
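The two-set calibration scheme above can be sketched as follows. This is a minimal stdlib sketch under assumed names (`split_calibration` is a hypothetical helper, not the package's API): one part is used to pre-compute the CPS once, the other to calibrate it.

```python
import random

def split_calibration(cal_x, cal_y, frac=0.5, seed=42):
    """Split a calibration set into two disjoint parts: one to
    pre-compute the conformal predictive system (CPS), one to
    calibrate its output. Hypothetical helper, not the package API."""
    idx = list(range(len(cal_x)))
    random.Random(seed).shuffle(idx)
    cut = int(len(idx) * frac)
    part_a = [(cal_x[i], cal_y[i]) for i in idx[:cut]]
    part_b = [(cal_x[i], cal_y[i]) for i in idx[cut:]]
    return part_a, part_b

# Pre-compute the CPS on part_a once, then reuse it when calibrating
# probability estimates on part_b for each test instance.
a, b = split_calibration(list(range(10)), list(range(10)))
```

The split ratio is an assumption for illustration; the point is only that the two parts are disjoint, which is what makes pre-computation valid.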

### Fixes

- Several minor updates to descriptions and notebooks in the repository.

## v0.3.2 - 2024-04-14

Full changelog

### Features

- Added fairness experiments and plotting for the XAI 2024 paper, and a Fairness tag for the weblinks.
- Added multi-class experiments and plotting for upcoming submissions, and a Multi-class tag for the weblinks.
- Improved the multi-class functionality, including more robust handling of multi-class problems (with or without Mondrian bins) in the VennAbers class.

### Fixes

- Updated the requirement for crepes to v0.6.2 to address known issues with some Python versions.
- Added the pythonpath for pytest to pyproject.toml to avoid module-not-found errors when running pytest locally.

## v0.3.1 - 2024-02-23

Full changelog

### Features

- Added support for Mondrian explanations, using the bins attribute. The bins attribute takes a categorical feature of the size of the calibration or test set (depending on context), indicating the category of each instance. For continuous attributes, crepes.extras.binning can be used to define categories through binning.
- Added BinaryRegressorDiscretizer and RegressorDiscretizer, which are similar to BinaryEntropyDiscretizer and EntropyDiscretizer in that they use a decision tree to identify suitable discretizations for numerical features. explain_factual and explain_counterfactual have been updated to use these discretizers for regression by default. In a future version, the possibility to assign your own discretizer may be removed.
- Updated the Further reading and citing section in the README:
  - Updated the reference and bibtex to the published version of the introductory paper:
    - Löfström, H., Löfström, T., Johansson, U., and Sönströd, C. (2024). Calibrated Explanations: with Uncertainty Information and Counterfactuals. Expert Systems with Applications, 1-27.
    - @article{lofstrom2024calibrated,
        title = {Calibrated explanations: With uncertainty information and counterfactuals},
        journal = {Expert Systems with Applications},
        pages = {123154},
        year = {2024},
        issn = {0957-4174},
        doi = {https://doi.org/10.1016/j.eswa.2024.123154},
        url = {https://www.sciencedirect.com/science/article/pii/S0957417424000198},
        author = {Helena Löfström and Tuwe Löfström and Ulf Johansson and Cecilia Sönströd},
        keywords = {Explainable AI, Feature importance, Calibrated explanations, Venn-Abers, Uncertainty quantification, Counterfactual explanations},
        abstract = {While local explanations for AI models can offer insights into individual predictions, such as feature importance, they are plagued by issues like instability. The unreliability of feature weights, often skewed due to poorly calibrated ML models, deepens these challenges. Moreover, the critical aspect of feature importance uncertainty remains mostly unaddressed in Explainable AI (XAI). The novel feature importance explanation method presented in this paper, called Calibrated Explanations (CE), is designed to tackle these issues head-on. Built on the foundation of Venn-Abers, CE not only calibrates the underlying model but also delivers reliable feature importance explanations with an exact definition of the feature weights. CE goes beyond conventional solutions by addressing output uncertainty. It accomplishes this by providing uncertainty quantification for both feature weights and the model's probability estimates. Additionally, CE is model-agnostic, featuring easily comprehensible conditional rules and the ability to generate counterfactual explanations with embedded uncertainty quantification. Results from an evaluation with 25 benchmark datasets underscore the efficacy of CE, making it stand as a fast, reliable, stable, and robust solution.}
      }
  - Added Code and results for the Investigating the impact of calibration on the quality of explanations paper, inspiring the idea behind Calibrated Explanations.
  - Added a bibtex to the software repository:
    - @software{Lofstrom_Calibrated_Explanations_2024,
        author = {Löfström, Helena and Löfström, Tuwe and Johansson, Ulf and Sönströd, Cecilia and Matela, Rudy},
        license = {BSD-3-Clause},
        title = {Calibrated Explanations},
        url = {https://github.com/Moffran/calibrated_explanations},
        version = {v0.3.1},
        month = feb,
        year = {2024}
      }
  - Updated the docs/citing.md with the above changes.
- Added a CITATION.cff with citation data for the software repository.
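The Mondrian bins feature listed above assigns each instance a category, and for continuous attributes the categories come from binning. The sketch below shows quantile binning in the spirit of crepes.extras.binning, as a self-contained stdlib illustration, not the actual implementation.

```python
def quantile_bins(values, num_bins=3):
    """Assign each value a bin label 0..num_bins-1 based on quantile
    cut points. Sketch of Mondrian binning for a continuous feature;
    crepes.extras.binning provides the real implementation."""
    srt = sorted(values)
    n = len(srt)
    # Interior quantile boundaries (num_bins - 1 of them).
    cuts = [srt[(i * n) // num_bins] for i in range(1, num_bins)]

    def bin_of(v):
        b = 0
        for c in cuts:
            if v >= c:
                b += 1
        return b

    return [bin_of(v) for v in values]

# Each instance's bin can then be passed as the bins argument so that
# calibration is performed per category (Mondrian calibration).
labels = quantile_bins([0.1, 0.4, 0.35, 0.8, 0.9, 0.05], num_bins=2)
# → [0, 1, 0, 1, 1, 0]
```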

### Fixes

- Extended __repr__ to include additional fields when verbose=True.
- Fixed a minor bug in the example provided in README.md and getting_started.md, as described in issue #26.
- Added utils.transform_to_numeric and a clarification about known limitations in README.md in response to issue #28.
- Fixed a minor bug in FactualExplanation.__plot_probabilistic that was triggered when no features were to be shown.
- Fixed a bug with the discretizers in core.
- Fixed a bug with saving plots to file using the filename parameter.
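A helper like utils.transform_to_numeric maps categorical values to integer codes before explanation. The function below is an illustrative stand-in with an assumed signature, not the package's actual implementation.

```python
def transform_to_numeric_sketch(column):
    """Map each distinct categorical value to an integer code, returning
    the encoded column and the mapping. Illustrative stand-in for
    utils.transform_to_numeric; codes are assigned in order of appearance."""
    mapping = {}
    encoded = []
    for v in column:
        if v not in mapping:
            mapping[v] = len(mapping)
        encoded.append(mapping[v])
    return encoded, mapping

codes, mapping = transform_to_numeric_sketch(["red", "blue", "red", "green"])
# codes == [0, 1, 0, 2]; mapping == {"red": 0, "blue": 1, "green": 2}
```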

## v0.3.0 - 2024-01-02

Full changelog

### Features

### Fixes

- Filtered out extreme target values in the quickstart notebook to make the regression examples more realistic.
- Fixed bugs related to how plots can be saved to file.
- Fixed an issue where add_conjunctions with max_rule_size=3 did not work.

## v0.2.3 - 2023-11-04

Full changelog

### Features

### Fixes

- Fix in CalibratedExplainer to ensure that greater-than works identically to less-than.
- Bugfix in FactualExplanation._get_rules(), which caused an error when categorical labels were missing.

## v0.2.2 - 2023-10-03

Full changelog

### Fixes

Minor adjustments and fixes.

## v0.2.1 - 2023-09-20

Full changelog

### Fixes

The wrapper file with the helper classes CalibratedAsShapExplainer and CalibratedAsLimeTabularExplanainer has been removed. The as_shap and as_lime functions still work.

## v0.2.0 - 2023-09-19

Full changelog

### Features

- Added a WrapCalibratedExplainer class which can be used for both classification and regression.
- Added quickstart_wrap to the notebooks folder.
- Added LIME_comparison to the notebooks folder.
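The wrapper workflow (fit the underlying model, calibrate, then explain) can be sketched as below. The class and method bodies are simplified assumptions for illustration, not the real WrapCalibratedExplainer implementation.

```python
class WrapSketch:
    """Minimal sketch of a fit/calibrate/explain wrapper around any
    learner exposing fit/predict, usable for classification or regression.
    Hypothetical stand-in, not the actual WrapCalibratedExplainer."""

    def __init__(self, learner):
        self.learner = learner
        self.calibrated = False

    def fit(self, x, y):
        self.learner.fit(x, y)
        return self

    def calibrate(self, cal_x, cal_y):
        # The real wrapper builds a CalibratedExplainer here; this
        # sketch just records the calibration set.
        self.cal = (cal_x, cal_y)
        self.calibrated = True
        return self

    def explain(self, test_x):
        if not self.calibrated:
            raise RuntimeError("call calibrate() before explain()")
        # Placeholder: return underlying predictions as 'explanations'.
        return [self.learner.predict(x) for x in test_x]


class MeanLearner:
    """Toy learner predicting the training mean, for the usage example."""

    def fit(self, x, y):
        self.mean = sum(y) / len(y)

    def predict(self, x):
        return self.mean


w = WrapSketch(MeanLearner()).fit([[0], [1]], [0.0, 2.0]).calibrate([[2]], [1.0])
```

Chaining fit and calibrate mirrors the two distinct data sets the method requires: a proper training set and a separate calibration set.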

### Fixes

- Removed the dependency on shap and scikit-learn and closed issue #8.
- Updated the weights to match LIME's weights (to ensure that a positive weight has the same meaning in both).
- Changed the name of the parameter y (representing the threshold in probabilistic regression) to threshold.

## v0.1.1 - 2023-09-14

Full changelog

### Features

- Exchanged the slow VennABERS_by_def function for the VennAbers class in the venn-abers package.

### Fixes

- Low and high weights are correctly assigned, so that low < high is always the case.
- Adjusted the number of decimals in counterfactual rules to 2.

## v0.1.0 - 2023-09-04

Full changelog

### Features

- Performance: Fast, reliable, stable, and robust feature importance explanations.
- Calibrated Explanations: Calibration of the underlying model to ensure that predictions reflect reality.
- Uncertainty Quantification: Uncertainty quantification of the prediction from the underlying model and of the feature importance weights.
- Interpretation: Rules with straightforward interpretation in relation to the feature weights.
- Factual and Counterfactual Explanations: Possibility to generate counterfactual rules with uncertainty quantification of the expected predictions.
- Conjunctive Rules: Conjunctive rules conveying the joint contribution between features, added since the original version.
- Multiclass Support: Multiclass support has been added since the original version, which was developed for the paper Calibrated Explanations: with Uncertainty Information and Counterfactuals.
- Regression Support: Support for explanations of standard regression was developed and is described in the paper Calibrated Explanations for Regression.
- Probabilistic Regression Support: Support for probabilistic explanations of standard regression was added together with regression support and is described in the paper mentioned above.
- Code Structure: The code structure has been much improved. The CalibratedExplainer, when applied to a model and a collection of test instances, creates a collection class, CalibratedExplanations, holding CalibratedExplanation objects, which are either FactualExplanation or CounterfactualExplanation objects. Operations can be applied to all explanations in the collection directly through CalibratedExplanations or through each individual CalibratedExplanation (see the documentation).
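The collection structure described in the Code Structure item above can be sketched with simplified placeholder classes; the real classes carry far more state, so this only illustrates how operations broadcast over the collection.

```python
class CalibratedExplanationSketch:
    """Base for a single explanation (factual or counterfactual).
    Placeholder mirroring the role of CalibratedExplanation."""

    def __init__(self, instance):
        self.instance = instance

    def kind(self):
        raise NotImplementedError


class FactualSketch(CalibratedExplanationSketch):
    def kind(self):
        return "factual"


class CounterfactualSketch(CalibratedExplanationSketch):
    def kind(self):
        return "counterfactual"


class ExplanationsSketch:
    """Collection applying an operation to every contained explanation
    at once, mirroring how CalibratedExplanations wraps its
    CalibratedExplanation objects."""

    def __init__(self, explanations):
        self.explanations = explanations

    def kinds(self):
        # One call on the collection fans out to each explanation.
        return [e.kind() for e in self.explanations]


coll = ExplanationsSketch([FactualSketch(0), CounterfactualSketch(1)])
# coll.kinds() → ["factual", "counterfactual"]
```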

### Fixes

Numerous. The code has been substantially refactored and improved since the original version, and is now tested and documented.