Img2Tab: Automatic Class Relevant Concept Discovery from StyleGAN Features for Explainable Image Classification

License: GPL v3

Traditional tabular classifiers provide explainable decision-making with interpretable features (concepts). However, using their explainability in vision tasks has been limited due to the pixel representation of images. In this paper, we design Img2Tabs that classify images by concepts to harness the explainability of tabular classifiers. Img2Tabs encode image pixels into tabular features by StyleGAN inversion. Since not all of the resulting features are class-relevant or interpretable due to their generative nature, we expect Img2Tab classifiers to discover class-relevant concepts automatically from the StyleGAN features. Thus, we propose a novel method using the Wasserstein-1 metric to quantify class-relevancy and interpretability simultaneously. Using this method, we investigate whether important features extracted by tabular classifiers are class-relevant concepts. Consequently, we determine the most effective classifier for Img2Tabs in terms of discovering class-relevant concepts automatically from StyleGAN features. In evaluations, we demonstrate concept-based explanations through importance and visualization. Img2Tab achieves top-1 accuracy that is on par with CNN classifiers and deep feature learning baselines. Additionally, we show that users can easily debug Img2Tab classifiers at the concept level to ensure unbiased and fair decision-making without sacrificing accuracy.
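As a rough illustration of the class-relevancy scoring described above, the sketch below scores each dimension of the encoded feature matrix $\Psi$ by the Wasserstein-1 distance between its two class-conditional distributions. The function name, data shapes, and the use of scipy.stats.wasserstein_distance are illustrative assumptions, not the paper's exact implementation.

```python
# Illustrative sketch (not the paper's exact method): score each StyleGAN
# feature dimension of Psi by the Wasserstein-1 distance between its
# class-conditional distributions, as a proxy for class-relevancy.
import numpy as np
from scipy.stats import wasserstein_distance

def class_relevancy_scores(psi: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """psi: (n_samples, n_features) encoded features; labels: binary class labels."""
    pos, neg = psi[labels == 1], psi[labels == 0]
    return np.array([wasserstein_distance(pos[:, j], neg[:, j])
                     for j in range(psi.shape[1])])

# Toy stand-in data; replace with real pre-encoded Psi and CelebA labels.
rng = np.random.default_rng(0)
psi = rng.normal(size=(200, 16))
labels = rng.integers(0, 2, size=200)
print("Most class-relevant feature index:", int(class_relevancy_scores(psi, labels).argmax()))
```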

1. Description

Official demo implementation of the paper "Img2Tab: Automatic Class Relevant Concept Discovery from StyleGAN Features for Explainable Image Classification". Img2Tab_demo.ipynb contains the following:

- Concept-based prediction
- Measuring concept importance
- Visualizing important concepts
- Presenting the concepts with the top-5 Wasserstein-1 metric values.
- Concept-based debugging to exclude specific unwanted concepts (a minimal sketch follows this list).
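As a rough illustration of concept-based debugging, the sketch below drops user-specified concept columns from $\Psi$ before fitting a generic tabular classifier. The classifier choice (scikit-learn's GradientBoostingClassifier) and the toy data are assumptions for illustration, not the configuration used in the notebook.

```python
# Hedged sketch of concept-level debugging: exclude unwanted concept columns
# from Psi, then fit a generic tabular classifier on the remaining concepts.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def fit_without_concepts(psi, labels, unwanted):
    keep = [j for j in range(psi.shape[1]) if j not in set(unwanted)]
    clf = GradientBoostingClassifier().fit(psi[:, keep], labels)
    return clf, keep

# Toy example: pretend concept 3 is an unwanted (e.g., biased) concept.
rng = np.random.default_rng(0)
psi = rng.normal(size=(300, 16))
labels = (psi[:, 3] + 0.5 * psi[:, 7] > 0).astype(int)
clf, keep = fit_without_concepts(psi, labels, unwanted=[3])
print("Accuracy without the excluded concept:", clf.score(psi[:, keep], labels))
```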

2. Prerequisites and dependencies

  • Linux
  • Python 3.9.7
  • PyTorch 1.12.1
  • NVIDIA GPU + CUDA cuDNN (CPU is supported, but not computationally feasible)
  • CUDA version 11.4
  • Dependency details are provided in Img2Tab_env.yaml

Installation

  • Clone the repository:
git clone https://github.com/songsnim/Img2Tab_pytorch
cd Img2Tab_pytorch

3. Img2Tab pre-processed datasets

We provide pre-encoded $\Psi$ and corresponding labels as well as standardized $\Psi$.

| Path | Description |
| --- | --- |
| Pre-encoded sets with CelebA | This link provides pre-encoded $\Psi$ sets produced by the Img2Tab encoder. |

Download all files from the link into the datasets folder.
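Once the files are in place, loading them might look like the sketch below. The file names are placeholders (the actual names come from the download link), and the per-feature standardization simply mirrors the standardized $\Psi$ sets that are also provided.

```python
# Loading sketch with placeholder file names; substitute the actual files
# downloaded into the datasets/ folder.
import numpy as np

psi_train = np.load("datasets/psi_train.npy")    # pre-encoded Psi features (placeholder name)
y_train = np.load("datasets/labels_train.npy")   # corresponding labels (placeholder name)

# Per-feature standardization, mirroring the provided standardized Psi sets.
mu, sigma = psi_train.mean(axis=0), psi_train.std(axis=0) + 1e-8
psi_train_std = (psi_train - mu) / sigma
print(psi_train_std.shape, y_train.shape)
```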

4. Img2Tab modules

Download pre-trained Img2Tab modules below.

| Path | Description |
| --- | --- |
| FFHQ Inversion | FFHQ e4e encoder; the main Img2Tab inversion network. |
| FFHQ StyleGAN | Pre-trained StyleGAN model on FFHQ from rosinality. |
| IR-SE50 Model | Pre-trained IR-SE50 model from TreB1eN, used in the ID loss during e4e training. |
| MOCOv2 Model | Pre-trained ResNet-50 model trained with MOCOv2, used in the e4e similarity loss for domains other than human faces during e4e training. |
| Face landmark | Pre-trained dlib face landmark detector used for CelebA face recognition. This file is zipped and should be extracted into the pretrained_models folder. |

All of these files should be downloaded into the pretrained_models folder.
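A quick way to sanity-check the downloads is to try loading each checkpoint, as in the sketch below. The file names are placeholders; map them to whatever the links above actually provide.

```python
# Sanity-check sketch with placeholder checkpoint names; adjust to the files
# actually downloaded into pretrained_models/.
import os
import torch

PRETRAINED_DIR = "pretrained_models"
expected = [
    "e4e_ffhq_encode.pt",   # FFHQ e4e inversion encoder (placeholder name)
    "stylegan2-ffhq.pt",    # FFHQ StyleGAN weights (placeholder name)
    "model_ir_se50.pth",    # IR-SE50 model for the ID loss (placeholder name)
]
for name in expected:
    path = os.path.join(PRETRAINED_DIR, name)
    if not os.path.exists(path):
        print(f"{name}: missing, download it into {PRETRAINED_DIR}/")
        continue
    ckpt = torch.load(path, map_location="cpu")
    keys = list(ckpt)[:5] if isinstance(ckpt, dict) else type(ckpt).__name__
    print(f"{name}: loaded ({keys})")
```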

5. Run Img2Tab demo

Follow the instructions in Img2Tab_demo.ipynb to try the Img2Tab demo.

Acknowledgments

This code borrows heavily from encoder4editing.

Citation

Please cite our paper via this link: Img2Tab: Automatic Class Relevant Concept Discovery from StyleGAN Features for Explainable Image Classification
