A preprocessing-free, lesion-aware deep learning framework for robust atlas registration.
This repository contains a deep-learning atlas registration framework designed for pathological images, with a special focus on cases where lesions have no anatomical counterpart in the atlas. The method operates directly on native medical images—no preprocessing or lesion masks required—and robustly handles missing correspondences using distance-map–based similarity and a volume-preserving loss. It supports one-shot overfitting for patient-specific refinement and achieves high-accuracy, anatomically plausible registrations across multi-centre clinical datasets. The framework enables reproducible cohort-level spatial analyses and has been successfully applied to melanoma brain metastases across multiple institutions.
Please note: there is currently no maintained main branch; please check out the refactoring branch instead.
This is a heavily refactored fork of Aladdin, adapted to my needs for atlas registration. You can find the original work here:
Aladdin: Joint Atlas Building and Diffeomorphic Registration Learning with Pairwise Alignment
Zhipeng Ding and Marc Niethammer
CVPR 2022 (arXiv eprint)
- Preprocessing-free: Works directly on native medical images
- Lesion-aware: Handles cases where lesions have no anatomical counterpart in the atlas
- Robust registration: Uses distance-map-based similarity and volume-preserving loss
- Multi-center support: Achieves high-accuracy registrations across clinical datasets
- Flexible training: Supports classical model training as well as one-shot overfitting for patient-specific refinement
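The two loss ingredients named above can be sketched in a few lines of NumPy/SciPy. This is an illustrative sketch only, not the repository's implementation; the function names and signatures (`distance_map_similarity`, `volume_preservation_loss`) are hypothetical:

```python
# Illustrative sketches of a distance-map-based similarity and a
# volume-preservation penalty. NOT the repository's implementation;
# names and signatures are hypothetical.
import numpy as np
from scipy.ndimage import distance_transform_edt


def distance_map_similarity(fixed_mask: np.ndarray, warped_mask: np.ndarray) -> float:
    """Mean distance-to-atlas-structure sampled at warped foreground voxels.

    Distance maps remain well-defined even where a lesion has no intensity
    counterpart in the atlas, which is the core robustness idea.
    """
    # Distance of every voxel to the nearest foreground voxel of the fixed mask.
    dist_to_fixed = distance_transform_edt(~fixed_mask.astype(bool))
    warped = warped_mask.astype(bool)
    if not warped.any():
        return float("inf")
    return float(dist_to_fixed[warped].mean())


def volume_preservation_loss(disp: np.ndarray) -> float:
    """Penalize local volume change of phi(x) = x + u(x) via (det J - 1)^2.

    disp: (2, H, W) displacement field (2-D for brevity; 3-D is analogous).
    """
    du_d0, du_d1 = np.gradient(disp[0])  # gradients of the axis-0 displacement
    dv_d0, dv_d1 = np.gradient(disp[1])  # gradients of the axis-1 displacement
    det_j = (1.0 + du_d0) * (1.0 + dv_d1) - du_d1 * dv_d0
    return float(np.mean((det_j - 1.0) ** 2))


# Toy checks: perfect overlap gives zero distance; the identity transform
# (zero displacement) is exactly volume preserving.
mask = np.zeros((8, 8), dtype=bool)
mask[3:5, 3:5] = True
print(distance_map_similarity(mask, mask))              # 0.0
print(volume_preservation_loss(np.zeros((2, 16, 16))))  # 0.0
```

Both terms are differentiable-in-spirit surrogates: in the actual framework they are evaluated on GPU tensors inside the training loop rather than on NumPy arrays.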
Dependencies are managed with Poetry. After checking out the repository, run:
poetry install
To activate the environment, run:
poetry shell
The main entry point for the framework is TrainAtlas.py. This script provides multiple modes of operation through command-line arguments:
To start training a new model, run:
python ./code/TrainAtlas.py -c ./Path/to/your/config/file.json
To test a trained model with the best checkpoint, use:
python ./code/TrainAtlas.py -c ./Path/to/your/config/file.json -t
In testing mode, the framework stores the results in the outputPath configured in the config file.
To make predictions with a trained model, use:
python ./code/TrainAtlas.py -c ./Path/to/your/config/file.json -p
In prediction mode, the framework stores the results for each input dataset in that dataset's folder.
To resume training from a specific checkpoint, use:
python ./code/TrainAtlas.py -c ./Path/to/your/config/file.json -r path/to/checkpoint.txt
To test the image sampling functionality (e.g., for debugging), use:
python ./code/TrainAtlas.py -c ./Path/to/your/config/file.json -s 10
This will sample 10 images from the dataset.
To run hyperparameter optimization with TUNE:
python ./code/TrainAtlas.py -c ./Path/to/your/config/file.json -o
To analyze the results of a hyperparameter search:
python ./code/TrainAtlas.py -c ./Path/to/your/config/file.json -a
The framework uses JSON configuration files to specify all parameters. A sample configuration file can be found in the resources directory. The configuration file includes parameters for:
- Data paths and settings
- Network architecture
- Loss functions
- Optimization settings
- Logging and checkpointing
- Registration grid parameters
- Learning rates
- Loss weights
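To give a feel for the shape of such a file, here is a rough sketch. The field names below are purely illustrative and do not reflect the actual schema; refer to the sample file in the resources directory for the real structure:

```json
{
  "data": { "trainPath": "./data/train", "outputPath": "./results" },
  "network": { "architecture": "unet", "features": [16, 32, 64] },
  "loss": { "similarity": "distance_map", "volumePreservationWeight": 0.1 },
  "optimization": { "learningRate": 0.0001, "epochs": 300 },
  "logging": { "checkpointEvery": 10 }
}
```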
