A package for Counterfactual Explanations and Algorithmic Recourse in Julia.
A curated list of awesome academic research, books, code of ethics, data sets, institutes, newsletters, principles, podcasts, reports, tools, regulations and standards related to Responsible AI, Trustworthy AI, and Human-Centered AI.
This is an open-source tool to assess and improve the trustworthiness of AI systems.
Comparing CNN and ViT explainability with MLOps-based environment
Repository for the familiar R package. familiar implements an end-to-end pipeline for interpretable machine learning on tabular data.
An awesome & curated list for Artificial General Intelligence, an emerging interdisciplinary field that combines artificial intelligence and computational cognitive science.
A Python library for explainable AI using approximate reasoning
User documentation for KServe.
Fit interpretable models. Explain blackbox machine learning.
Generating Explanations for Puzzles Solved with Automated Planning Using Planning Ontology and NLP
Increase interpretability of your models!
Concise summaries of key papers in responsible AI.
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).
My GitHub page.
[EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey"
Scene Graph Generation in Autonomous Driving: a neuro-symbolic approach
Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations
Advanced AI explainability for computer vision: all the non-gradient methods.