Interpretable graph classifications using Graph Convolutional Neural Network (GLSL, updated Nov 7, 2020)
Public release of the code and examples used in "Analysis and Trends of Attribution Methods for XAI."
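As a rough illustration of the attribution methods surveyed above, here is a minimal pure-Python sketch of Integrated Gradients on a toy differentiable function. The function, gradient, and inputs are all illustrative assumptions; in practice one would use Captum's `IntegratedGradients` on a real model.

```python
# Sketch of Integrated Gradients: accumulate gradients along the
# straight-line path from a baseline to the input, then scale by
# (input - baseline). Toy model and values are illustrative only.

def integrated_gradients(grad_f, x, baseline, steps=100):
    """Riemann-sum approximation of IG along the baseline->x path."""
    acc = [0.0] * len(x)
    for k in range(1, steps + 1):
        alpha = k / steps
        point = [b + alpha * (xi - b) for xi, b in zip(x, baseline)]
        g = grad_f(point)
        for i in range(len(x)):
            acc[i] += g[i]
    # Scale the averaged gradient by the input-baseline difference.
    return [(xi - b) * a / steps for xi, b, a in zip(x, baseline, acc)]

# Toy model f(x) = x0^2 + 3*x1 with its analytic gradient.
f = lambda x: x[0] ** 2 + 3 * x[1]
grad_f = lambda x: [2 * x[0], 3.0]

attr = integrated_gradients(grad_f, x=[1.0, 2.0], baseline=[0.0, 0.0])
# Completeness axiom: attributions sum to f(x) - f(baseline) = 7.
print(sum(attr))
```

The completeness property checked at the end (attributions summing to the output difference) is the same sanity check Captum users apply via the `delta` returned by `IntegratedGradients.attribute`.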
Deep_classiflie_db is the backend data system for managing Deep Classiflie metadata, analyzing Deep Classiflie intermediate datasets and orchestrating Deep Classiflie model training pipelines. Deep_classiflie_db includes data scraping modules for the initial model data sources. Deep Classiflie depends upon deep_classiflie_db for much of its anal…
XAI Tutorial for the Explainable AI track in the ALPS winter school 2021
A small repository to test Captum Explainable AI with a trained Flair transformers-based text classifier.
OdoriFy is an open-source tool with multiple prediction engines. This is the source code of the webserver.
Trained Neural Networks (LSTM, HybridCNN/LSTM, PyramidCNN, Transformers, etc.) & comparison for the task of Hate Speech Detection on the OLID Dataset (Tweets).
Interpretability Metrics
This repository contains the source code for Indoor Scene Detector, a full stack deep learning computer vision application.
Overview of different model interpretability libraries.
COVID-19 forecasting model for East Java cities using Joint Learning. My undergrad thesis.
End-to-end toxic Russian comment classification
Model interpretability for Explainable Artificial Intelligence
Collection of NLP model explanations and accompanying analysis tools
Deep Classiflie is a framework for developing ML models that bolster fact-checking efficiency. As a POC, the initial alpha release of Deep Classiflie generates/analyzes a model that continuously classifies a single individual's statements (Donald Trump) using a single ground truth labeling source (The Washington Post). For statements the model d…
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
XAI-Tris
Based on the paper "Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)" and Captum's implementation (https://captum.ai/docs/captum_insights), we developed this frontend for the Captum project using the Streamlit framework.
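A minimal sketch of the TCAV idea behind this frontend, under simplifying assumptions: a concept activation vector (CAV) is derived from layer activations of "concept" vs. random examples, and each example is scored by the directional derivative of the model output along that CAV. The data, the difference-of-means CAV (a stand-in for the linear classifier TCAV actually trains), and all names here are illustrative, not Captum's API.

```python
# Toy TCAV sketch: difference-of-means CAV + directional-derivative
# sensitivities. All activations and gradients below are made up.
import random

random.seed(0)

def mean(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

# Layer activations for "concept" examples vs. random counterexamples.
concept_acts = [[1.0 + random.gauss(0, 0.1), random.gauss(0, 0.1)]
                for _ in range(20)]
random_acts = [[random.gauss(0, 0.1), random.gauss(0, 0.1)]
               for _ in range(20)]

# CAV: direction separating concept activations from random ones.
mc, mr = mean(concept_acts), mean(random_acts)
cav = [a - b for a, b in zip(mc, mr)]

# Sensitivity of an example = gradient (w.r.t. the activations) dotted
# with the CAV; the TCAV score is the fraction with positive sign.
grads = [[0.5, -0.2], [0.8, 0.1], [-0.3, 0.4]]
sensitivities = [sum(g_i * c_i for g_i, c_i in zip(g, cav)) for g in grads]
tcav_score = sum(s > 0 for s in sensitivities) / len(sensitivities)
print(tcav_score)
```

The frontend described above visualizes exactly this kind of score per concept and layer, via Captum's TCAV tooling rather than a hand-rolled CAV.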