
This repository provides the training codes to classify aerial images using a custom-built model (transfer learning with InceptionResNetV2 as the backbone) and explainers to explain the predictions with LIME and GradCAM on an interface that lets you upload or paste images for classification and see visual explanations.


AID Scene Classification on Remote Sensing Data from Google Earth, Leveraging Transfer Learning, and Interpretation of Predictions Using LIME and GradCAM.

This repository contains code (Jupyter notebooks) for a scene classification model using transfer learning on the AID dataset (https://www.kaggle.com/datasets/jiayuanchengala/aid-scene-classification-datasets), along with functionality for explaining predictions using LIME (Local Interpretable Model-agnostic Explanations) and Grad-CAM (Gradient-weighted Class Activation Mapping). Additionally, a user interface built with Gradio allows users to upload or paste images for classification and visual explanation.

Project Overview

  • Model: A pre-trained convolutional neural network (CNN) is leveraged (transfer learning) for classifying AID scene images into 30 categories.
  • Explainability: LIME and GradCAM are used to provide insights into the model's decision-making process for individual predictions.
  • User Interface: Gradio offers a user-friendly interface for uploading/pasting images, receiving predictions with probabilities, labels, and visual explanations generated by LIME and GradCAM.
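The transfer-learning setup described above can be sketched as follows. This is a minimal illustration, not the exact notebook code: the head layer sizes, input resolution, and hyperparameters are assumptions, and `weights=None` is used here only to avoid the ImageNet download (the actual transfer-learning workflow would use `weights="imagenet"`).

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionResNetV2

NUM_CLASSES = 30  # AID has 30 scene categories

# Backbone: InceptionResNetV2 (use weights="imagenet" in practice;
# weights=None here only avoids the large download).
backbone = InceptionResNetV2(include_top=False, weights=None,
                             input_shape=(224, 224, 3))
backbone.trainable = False  # freeze the pre-trained convolutional base

# Custom classification head on top of the frozen backbone
# (layer sizes are illustrative, not the repository's exact values).
model = models.Sequential([
    backbone,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Training then proceeds with `model.fit(...)` on the prepared AID images; a common follow-up is to unfreeze the top of the backbone and fine-tune with a lower learning rate.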

Dependencies

This project requires the following Python libraries:

  • TensorFlow
  • Keras
  • LIME
  • grad-cam
  • Gradio

Installation:

pip install tensorflow keras lime grad-cam gradio

Usage

  1. Install dependencies: (See above)

  2. Prepare data: download the AID dataset from the Kaggle link above (the images are grouped into one folder per scene class).

  3. Train the model and predict (Image Classifier):

    • Train the model yourself; alternatively, a download link for the trained model, the processed data, and its labels will be provided soon.
    • Partial Classification Model: 11_Class_Classification_Model.ipynb with 11 class labels.
    • Classification Model: Classification_Model.ipynb with 30 class labels.
    • Epoch-wise loss and accuracy are given below.

  4. Explainers:

    • LIME: perturbs the input image and fits a local linear surrogate model to explain individual predictions of the image classifier.
    • Grad-CAM: uses the gradients flowing into the last convolutional layer to produce a heatmap of the regions that contribute most to the prediction.
    • For the explainer code, refer to Explainer_and_its_Interface.ipynb.
  5. Interface:

    • Upload an image or paste an image URL in the provided field.
    • Click "Predict" to receive the model's prediction with its probability and class label, along with visual explanations from LIME and Grad-CAM.
    • For the interface code, refer to Explainer_and_its_Interface.ipynb.
    • An interface preview is shown below:



Interface Preview

Notes

  • This README provides a general overview. Refer to the notebooks (.ipynb) in the repository for detailed implementation and configuration options.
  • The pre-trained model and any additional data files might be stored separately. Modify the code to point to the correct locations.

This project offers a starting point for exploring transfer learning, explainable AI techniques, and building user interfaces for image classification and explanation tasks on the AID (Aerial Image Dataset) scene classification dataset.
