Siamese UNet for Imbalanced Binary Change Detection using EO-SAR Images
This repository contains the solution for the GalaxEye Satellite AI Research Intern assessment. The task involves performing pixel-level binary change detection (0 = No-Change, 1 = Change) given co-registered pre-event (EO) and post-event (SAR) image pairs.
The solution implements a Siamese UNet architecture with weight-shared ResNet-34 encoders to extract multi-modal features, absolute difference-based skip fusion, and a combined BCE+Dice loss function to tackle severe class imbalance (the dataset exhibits a 58:1 ratio of no-change to change pixels).
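To make the loss concrete, here is a minimal NumPy sketch of a combined BCE + Dice objective. The 0.5 weighting factor and the epsilon are assumptions for illustration; the repository's PyTorch implementation may balance the two terms differently:

```python
import numpy as np

def bce_dice_loss(probs, target, bce_weight=0.5, eps=1e-7):
    """Combined BCE + Dice loss on per-pixel change probabilities.

    probs, target: float arrays of the same shape, values in [0, 1].
    bce_weight: assumed 50/50 balance between the two terms.
    """
    probs = np.clip(probs, eps, 1.0 - eps)
    # Binary cross-entropy, averaged over all pixels
    bce = -np.mean(target * np.log(probs) + (1 - target) * np.log(1 - probs))
    # Soft Dice: overlap-based, so the dominant no-change class
    # does not swamp the gradient signal from rare change pixels
    intersection = np.sum(probs * target)
    dice = (2.0 * intersection + eps) / (np.sum(probs) + np.sum(target) + eps)
    return bce_weight * bce + (1.0 - bce_weight) * (1.0 - dice)
```

The Dice term is what counteracts the 58:1 imbalance: it is computed from the overlap with the change mask only, so a trivial all-zero prediction scores poorly regardless of how many no-change pixels it gets right.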
- Python 3.11+
- See `requirements.txt` for all pinned dependencies.
Run the following commands to create a virtual environment and install dependencies:
```bash
# 1. Create a virtual environment
python3 -m venv .venv

# 2. Activate it
# On Mac/Linux:
source .venv/bin/activate
# On Windows:
# .venv\Scripts\activate

# 3. Install requirements
pip install -r requirements.txt
```

Place the provided dataset inside a folder named `Dataset/` at the root of the repository. The expected directory layout is:
```
Galaxyeyeai/
├── Dataset/
│   ├── train/
│   │   ├── pre-event/
│   │   ├── post-event/
│   │   └── target/
│   ├── val/
│   │   ├── pre-event/
│   │   ├── post-event/
│   │   └── target/
│   └── test/
│       ├── pre-event/
│       ├── post-event/
│       └── target/
├── src/
├── config.yaml
├── train.py
├── eval.py
└── ...
```
Note: The script automatically handles the mandatory label remapping (0/1 -> 0, 2/3 -> 1).
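For reference, the remapping can be expressed in one line; `remap_labels` is a hypothetical helper name for illustration, not necessarily what the loader calls it:

```python
import numpy as np

def remap_labels(mask: np.ndarray) -> np.ndarray:
    """Collapse raw labels to binary: {0, 1} -> 0 (no-change), {2, 3} -> 1 (change)."""
    return (mask >= 2).astype(np.uint8)
```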
To train the model from scratch, execute:
```bash
python train.py --config config.yaml
```

Checkpoints will be saved automatically to `checkpoints/best.pth` and `checkpoints/last.pth`. TensorBoard logs are stored in `runs/`.
To evaluate the model on the test data (or any split) and generate metrics + visualisations, run:
```bash
python eval.py --data_path Dataset/test --weights checkpoints/best.pth
```

Results will be saved inside the `results/` folder, including JSON metrics, confusion matrices, and qualitative image comparisons.
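As a sketch of what the metric computation might look like (the actual implementation in `eval.py` may differ), the pixel-level scores for the change class can all be derived from the confusion matrix of a predicted binary mask against the target:

```python
import numpy as np

def change_metrics(pred: np.ndarray, target: np.ndarray) -> dict:
    """Pixel-level metrics for the 'change' class (label 1).

    pred, target: binary {0, 1} arrays of the same shape.
    """
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.sum(pred & target)   # change pixels correctly detected
    fp = np.sum(pred & ~target)  # false alarms
    fn = np.sum(~pred & target)  # missed changes
    eps = 1e-12  # avoid division by zero on empty masks
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return {
        "precision": precision,
        "recall": recall,
        "iou": tp / (tp + fp + fn + eps),
        "f1": 2 * precision * recall / (precision + recall + eps),
    }
```

Note that true negatives never enter any of these formulas, which is why they remain informative despite the heavy no-change majority.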
(Insert public link to your model weights here. e.g., Google Drive or Hugging Face Hub link)
- Link: Hugging Face Repository
Note: these metrics are placeholders; replace them with the final values once training completes on local hardware.
| Split | IoU | Precision | Recall | F1 Score |
|---|---|---|---|---|
| Validation | 24.74% | 26.45% | 79.29% | 39.67% |
| Test | 3.50% | 4.19% | 17.37% | 6.75% |
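As a sanity check, these metrics are algebraically linked for a single binary class: F1 is the harmonic mean of precision and recall, and IoU = F1 / (2 − F1). Two tiny helpers make it easy to verify that a reported row is internally consistent:

```python
def f1_from_pr(precision: float, recall: float) -> float:
    # Harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

def iou_from_f1(f1: float) -> float:
    # For a binary mask: IoU = TP/(TP+FP+FN) and F1 = 2TP/(2TP+FP+FN),
    # hence IoU = F1 / (2 - F1)
    return f1 / (2 - f1)
```

For the validation row above, 26.45% precision and 79.29% recall give F1 ≈ 39.67%, and that F1 gives IoU ≈ 24.74%, matching the table.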
- UNet: Ronneberger et al., "U-Net: Convolutional Networks for Biomedical Image Segmentation", MICCAI 2015.
- Siamese Networks for CD: Daudt et al., "Fully Convolutional Siamese Networks for Change Detection", ICIP 2018.
- Albumentations: Buslaev et al., "Albumentations: fast and flexible image augmentations", Information 2020.
- Timm: Ross Wightman, "PyTorch Image Models", GitHub.