XMLX GitHub configuration
Updated Nov 23, 2021
The project focuses on knowledge distillation and explainability techniques to improve the performance of neural networks on natural images.
A curated list of explainability-related papers, articles, and resources focused on Large Language Models (LLMs). This repository aims to provide researchers, practitioners, and enthusiasts with insights into the explainability implications, challenges, and advancements surrounding these powerful models.
Code for the NLDB 2023 paper. Work partially funded by grant ANR-19-CE38-0011-03 from the French national research agency (ANR).
A binary dog-and-cat image classifier built with deep learning, made more transparent using Local Interpretable Model-agnostic Explanations (LIME): the model predicts dog and cat images while LIME highlights the image regions that drive each decision.
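The core LIME idea behind entries like this one can be sketched from scratch: sample perturbations around an instance, weight them by proximity, and fit a weighted linear surrogate whose coefficients serve as local feature importances. The sketch below uses a tabular toy problem rather than images, and the synthetic data, kernel width, and `lime_explain` helper are illustrative assumptions, not any repository's actual code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

rng = np.random.default_rng(0)

# Black-box model to explain: logistic regression on synthetic data
# where only features 0 and 2 matter (an assumption for this sketch).
X = rng.normal(size=(500, 4))
y = (X[:, 0] - 2 * X[:, 2] > 0).astype(int)
black_box = LogisticRegression().fit(X, y)

def lime_explain(x, predict_proba, num_samples=1000):
    """LIME-style local explanation: perturb x, weight perturbations
    by proximity, fit a weighted linear surrogate."""
    Z = x + rng.normal(size=(num_samples, x.size))
    dist = np.linalg.norm(Z - x, axis=1)
    kernel_width = np.sqrt(x.size) * 0.75        # LIME's common default
    weights = np.exp(-(dist ** 2) / kernel_width ** 2)
    target = predict_proba(Z)[:, 1]              # probability of class 1
    surrogate = Ridge(alpha=1.0).fit(Z, target, sample_weight=weights)
    return surrogate.coef_                       # local feature importances

x = np.array([1.0, 0.0, -1.0, 0.0])
coefs = lime_explain(x, black_box.predict_proba)
```

For this toy setup the surrogate's coefficients should be largest (in magnitude) on features 0 and 2, matching how the synthetic labels were generated; for images, the same recipe is applied over superpixels instead of raw features.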
One of the first dataset-level explainability libraries for 1D signals, using Grad-CAM++.
Code for the paper Tětková et al.: Knowledge Graphs for Empirical Concept Retrieval (accepted to The 2nd World Conference on eXplainable Artificial Intelligence).
Awesome Heart Sound Analysis - A Survey
A curated list of awesome resources on contrastive explanation in ML
Evaluation framework for post hoc explanation methods | Explainable AI (XAI)
Graduate research project in computer vision and deep learning explainability
This is a list of awesome prototype-based papers for explainable artificial intelligence.
This repository contains an interpretable/explainable ML model for liquefaction potential assessment of gravelly soils, developed using LightGBM and SHAP.
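SHAP, as used in entries like this one, attributes a prediction to features via Shapley values. A minimal from-scratch sketch (not the `shap` library's optimized TreeExplainer) computes them exactly by enumerating feature subsets, filling "absent" features from a baseline point; the toy linear model and baseline below are illustrative assumptions.

```python
import itertools
import math
import numpy as np

def shapley_values(f, x, baseline):
    """Exact Shapley values for f(x): average each feature's marginal
    contribution over all subsets, with absent features set to baseline."""
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in itertools.combinations(others, size):
                # Shapley weight |S|! (n - |S| - 1)! / n!
                w = math.factorial(size) * math.factorial(n - size - 1) / math.factorial(n)
                z_with = baseline.copy()
                z_with[list(S) + [i]] = x[list(S) + [i]]
                z_without = baseline.copy()
                z_without[list(S)] = x[list(S)]
                phi[i] += w * (f(z_with) - f(z_without))
    return phi

# Toy linear model: the exact Shapley value of feature i is then
# coef[i] * (x[i] - baseline[i]).
coef = np.array([2.0, -1.0, 0.5])
f = lambda z: float(coef @ z)
x = np.array([1.0, 3.0, -2.0])
baseline = np.zeros(3)
phi = shapley_values(f, x, baseline)
```

The values also satisfy the efficiency property, summing to `f(x) - f(baseline)`; the exponential subset enumeration is why practical tools like the repository's LightGBM+SHAP pairing rely on tree-specific fast algorithms instead.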
Repository for Kubach et al. bioRxiv/2019/804682 (2019)
[Competition] KOTRA Data Utilization Big Data Analysis Contest
Master's Thesis (Master's Degree in Artificial Intelligence and Robotics at Sapienza University of Rome) - 2023
SG-CF: Shapelet-Guided Counterfactual Explanation for Time Series Data (2022 Big Data)
🤖 Making AI understandable and transparent, enhancing trust and accountability.
Endocrine Disruption Explainer generates structural alerts for endocrine disruption of chemical compounds using Local Interpretable Model-Agnostic Explanations (LIME) of machine learning models trained on the TOX-21, EDC, and EDKB-FDA datasets.