This repository is associated with an interpretable/explainable ML model for liquefaction potential assessment of gravelly soils. The model is developed using LightGBM and SHAP.
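A minimal sketch of that kind of LightGBM + SHAP pipeline, assuming synthetic stand-in data and hypothetical feature names rather than the repository's actual gravelly-soil case histories:

```python
import lightgbm as lgb
import shap
from sklearn.datasets import make_classification

# Synthetic stand-in for a gravelly-soil case-history table; the feature
# names are hypothetical placeholders, not the paper's actual predictors.
X, y = make_classification(n_samples=300, n_features=6, random_state=0)
feature_names = ["qc", "fines", "depth", "CSR", "Vs", "Dr"]

model = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05, random_state=0)
model.fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
# Some shap versions return a per-class list for binary classifiers;
# keep the positive (liquefied) class in that case.
if isinstance(shap_values, list):
    shap_values = shap_values[1]
shap.summary_plot(shap_values, X, feature_names=feature_names)
```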
A new benchmark for graph neural network explainer methods
Use of machine learning and deep learning algorithms to recommend the best clinical options to health professionals in South Africa
XMLX GitHub configuration
Graduate research project in computer vision and deep learning explainability
This repository contains a comprehensive implementation of gradient descent for linear regression, including visualizations and comparisons with ordinary least squares (OLS) regression. It also includes an implementation of multiple linear regression using gradient descent.
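For reference, a compact sketch of the comparison described above: batch gradient descent on mean squared error converging to the same coefficients as the closed-form OLS solution (synthetic data, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1))
y = 3.0 * X[:, 0] + 2.0 + rng.normal(scale=0.5, size=200)

# Design matrix with an intercept column.
A = np.column_stack([np.ones(len(X)), X])

# Closed-form OLS solution via least squares.
theta_ols, *_ = np.linalg.lstsq(A, y, rcond=None)

# Batch gradient descent on the mean squared error.
theta = np.zeros(2)
lr = 0.1
for _ in range(1000):
    grad = 2.0 / len(y) * A.T @ (A @ theta - y)  # gradient of the MSE
    theta -= lr * grad

print("OLS:             ", theta_ols)  # approximately [2, 3]
print("Gradient descent:", theta)      # converges to the same solution
```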
Predicting categories of scientific papers with advanced machine learning techniques, covering class imbalance in multi-label data and explainable machine learning.
Implementation of Model-Agnostic Graph Explainability Technique from Scratch in PyTorch
Getting explanations for predictions made by black box models.
[Frontiers in AI Journal] Implementation of the paper "Interpreting Vision and Language Generative Models with Semantic Visual Priors"
This repo has a list of interesting literature in the domain of XAI
How to use SHAP to interpret machine learning models
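A minimal example of the typical SHAP workflow on a tree model, using the bundled diabetes dataset as a stand-in (not code from the repository above):

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
sv = explainer(X)              # Explanation object: .values, .base_values, .data

shap.plots.beeswarm(sv)        # global view: feature importance and direction
shap.plots.waterfall(sv[0])    # local view: one prediction decomposed
```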
Final year project, exploring the field of quantum machine learning.
This module extends the kernel SHAP method (introduced by Lundberg and Lee, 2017), which is local in nature, to a method that computes global SHAP values.
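The module's exact aggregation isn't shown here; one common way to turn local kernel-SHAP attributions into a global feature ranking is to average their absolute values across instances, sketched below on stand-in data:

```python
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.svm import SVR

X, y = load_diabetes(return_X_y=True)
model = SVR().fit(X, y)

# Kernel SHAP is model-agnostic but costly, so use a small background sample.
background = shap.sample(X, 50)
explainer = shap.KernelExplainer(model.predict, background)
local_values = explainer.shap_values(X[:100], nsamples=200)  # per-instance attributions

# A common global aggregation (assumed here): mean absolute SHAP per feature.
global_importance = np.abs(local_values).mean(axis=0)
for i in np.argsort(global_importance)[::-1]:
    print(f"feature {i}: {global_importance[i]:.4f}")
```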
A Novel Optimization Objective for Explainable and Customizable Learning of Multi-Classifiers
Explanation-guided boosting of machine learning evasion attacks.
Explaining sentiment classification by generating synthetic exemplars and counter-exemplars in the latent space
BBBP Explainer is a tool that generates structural alerts for blood-brain barrier penetrating and non-penetrating drugs using Local Interpretable Model-Agnostic Explanations (LIME) of machine learning models trained on the BBBP dataset.
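A hedged sketch of that LIME workflow on synthetic stand-in descriptors (the real repository uses molecular features from the BBBP dataset; the feature names below are hypothetical):

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for BBBP molecular descriptors.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
feature_names = [f"descriptor_{i}" for i in range(8)]  # hypothetical names

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["non-penetrating", "penetrating"],
    mode="classification",
)
# Explain one prediction: LIME fits a sparse linear surrogate locally.
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(exp.as_list())  # (feature condition, weight) pairs, alert-like rules
```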
Code for the School of AI challenge "Explainable AI for Wildfire Forecasting", sponsored by Pi School to help NOA, the National Observatory of Athens, apply explainable deep learning to wildfire forecasting.