This repository is associated with an interpretable/explainable ML model for liquefaction potential assessment of gravelly soils, developed using LightGBM and SHAP.
Updated Mar 28, 2024 · Jupyter Notebook
A new benchmark for graph neural network explainer methods
Building a model is just one piece of the data science puzzle; explaining how it works is just as important, especially in finance, where transparency and explainability are key.
Use of machine learning and deep learning algorithms to recommend the best clinical options to health professionals in South Africa
XMLX GitHub configuration
Graduate research project in computer vision and deep learning explainability
This repository contains a comprehensive implementation of gradient descent for linear regression, including visualizations and comparisons with ordinary least squares (OLS) regression. It also includes an additional implementation for multiple linear regression using gradient descent.
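The comparison this repo describes can be sketched in a few lines: fit simple linear regression by gradient descent on the mean squared error, then check the result against the ordinary least squares (OLS) closed form. The model, data, and hyperparameters below are made up for illustration, not taken from the repository.

```python
import numpy as np

# Synthetic data from a known line: y = 3x + 2 plus noise
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 3.0 * x + 2.0 + rng.normal(0, 0.5, 200)

# Gradient descent on mean squared error (1/n) * sum((w*x + b - y)^2)
w, b, lr = 0.0, 0.0, 0.01
for _ in range(5000):
    err = w * x + b - y
    w -= lr * 2 * np.mean(err * x)   # dMSE/dw
    b -= lr * 2 * np.mean(err)       # dMSE/db

# OLS closed form for comparison: slope = cov(x, y) / var(x)
w_ols = np.cov(x, y, bias=True)[0, 1] / np.var(x)
b_ols = y.mean() - w_ols * x.mean()

print(round(w, 2), round(b, 2))      # close to the OLS estimates below
print(round(w_ols, 2), round(b_ols, 2))
```

With a small enough learning rate, the iterates converge to the same coefficients the closed form gives, which is the sanity check the repo's visualizations presumably illustrate.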
The objective is to estimate the probability that an individual is susceptible to a severe heart problem based on a set of clinical features.
Predicting categories of scientific papers with advanced machine learning techniques involving class imbalance in multi-label data and explainable machine learning.
Implementation of Model-Agnostic Graph Explainability Technique from Scratch in PyTorch
Final year project, exploring the field of quantum machine learning.
This project aims to leverage machine learning techniques to predict employee attrition, allowing organizations to identify at-risk employees and implement strategies to improve retention rates.
This repo has a list of interesting literature in the domain of XAI
Getting explanations for predictions made by black box models.
Robustness of Global Feature Effect Explanations (ECML PKDD 2024)
How to use SHAP to interpret machine learning models
This module extends the kernel SHAP method (as introduced by Lundberg and Lee (2017)), which is local in nature, to a method that computes global SHAP values.
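The local-to-global idea can be illustrated without any SHAP library: compute exact Shapley values per instance (feasible here because the toy model has only two features), then aggregate their mean absolute value over a dataset into one global importance score per feature. The model, background values, and data below are invented for the example and are not the module's actual API.

```python
import itertools, math

def model(x):                       # toy linear model: feature 0 weighted 4x feature 1
    return 2.0 * x[0] + 0.5 * x[1]

background = [0.0, 0.0]             # reference values used for "absent" features

def shapley(x, i, n=2):
    """Exact local Shapley value of feature i for one instance x."""
    phi = 0.0
    others = [j for j in range(n) if j != i]
    for r in range(len(others) + 1):
        for S in itertools.combinations(others, r):
            # Standard Shapley weight |S|! (n - |S| - 1)! / n!
            weight = (math.factorial(len(S)) * math.factorial(n - len(S) - 1)
                      / math.factorial(n))
            def value(present):
                z = [x[j] if j in present else background[j] for j in range(n)]
                return model(z)
            phi += weight * (value(set(S) | {i}) - value(set(S)))
    return phi

data = [[1.0, 2.0], [-1.0, 4.0], [3.0, -2.0]]
# Global importance: mean |local Shapley value| of each feature across the dataset.
global_imp = [sum(abs(shapley(x, i)) for x in data) / len(data) for i in range(2)]
print(global_imp)                   # feature 0 dominates, as the model's weights suggest
```

For a linear model with a zero background, each local value reduces to coefficient times feature value, so the global scores recover the relative magnitudes of the model's weights.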
Explanation-guided boosting of machine learning evasion attacks.
A Novel Optimization Objective for Explainable and Customizable Learning of Multi-Classifiers
Explaining sentiment classification by generating synthetic exemplars and counter-exemplars in the latent space