Measuring galaxy environmental distance scales with GNNs and explainable ML models
Updated May 30, 2024 - Jupyter Notebook
Weighted Shapley Values and Weighted Confidence Intervals for Multiple Machine Learning Models and Stacked Ensembles
Gradient-boosted regression and decision tree models on behavioural animal data
This project aims to predict bank customer churn using a dataset derived from the Bank Customer Churn Prediction dataset available on Kaggle. The dataset for this competition has been generated from a deep learning model trained on the original dataset, with feature distributions being similar but not identical to the original data.
This repository is associated with an interpretable/explainable ML model for liquefaction potential assessment of gravelly soils, developed using LightGBM and SHAP.
This repository is associated with an interpretable/explainable ML model for liquefaction potential assessment of soils, developed using XGBoost and SHAP.
Code for my thesis on SHAP: implementations of decision tree, SVM, and BERT models on two datasets, IMDb and Argument Mining
Determining Feature Importance by Integrating Random Forest and SHAP in Python
📊🛰️ Data processing scripts, ML models, and Explainable AI results created as part of my Masters Thesis @ Johns Hopkins
Code for the EACL workshop paper "Can BERT eat RuCoLA? Topological Data Analysis to Explain"
AI applications can be found in various real-world systems, including vehicle system design and real-time car accident prediction. There is an increasing need to better explain AI-driven processes, especially given the potential legal disputes that might result from AI decisions. This analysis addresses these explainability and legal issues.
👨💻 This repository shows how machine learning and SHAP can be leveraged to understand the causes of production downtime ⌛
Coding challenge for a job interview examining the predictors of vehicle accident severity using GB Road Safety Data
An Analysis of Lassa Fever Outbreaks in Nigeria using Machine Learning Models and Shapley Values
Understanding the limitations of Gassmann's fluid substitution model using explainable ML
In this repository you will find explainability analyses of machine learning models.
Frontend for ShapEmotionsCorrectionAPI
The goal of SHAP is to explain the prediction of an instance x by computing the contribution of each feature to the prediction. The SHAP explanation method computes Shapley values from coalitional game theory. The feature values of a data instance act as players in a coalition.
Android malware detection using machine learning.