Official code for the paper: Contrastive Graph Modeling for Cross-domain Few-shot Medical Image Segmentation
- [News!] 2025-06-03: We have uploaded the full code.
- [News!] 2025-06-13: We have uploaded the model weights and prediction maps. As of now, all our experimental code and results have been open-sourced. We are still actively updating this repository for better result presentation. Stay tuned!
- [News!] 2025-12-25 🎄: Our paper has been accepted for publication in IEEE Transactions on Medical Imaging! 🎅🎁
- [x] Release model code.
- [x] Release model weights.
- [x] Release model prediction maps.
TL;DR: Anatomical structures are highly consistent across medical imaging domains and can be modeled as graphs. Compared to domain information filtering methods [1] [2], our approach yields superior cross-domain generalization while preserving strong source-domain specialization.
Cross-domain few-shot medical image segmentation (CD-FSMIS) offers a promising and data-efficient solution for medical applications where annotations are severely scarce and multimodal analysis is required. However, existing methods typically filter out domain-specific information to improve generalization, which inadvertently limits cross-domain performance and degrades source-domain accuracy. To address this, we present Contrastive Graph Modeling (C-Graph), a framework that leverages the structural consistency of medical images as a reliable domain-transferable prior. We represent image features as graphs, with pixels as nodes and semantic affinities as edges. A Structural Prior Graph (SPG) layer is proposed to capture and transfer target-category node dependencies and enable global structure modeling through explicit node interactions. Building upon SPG layers, we introduce a Subgraph Matching Decoding (SMD) mechanism that exploits semantic relations among nodes to guide prediction. Furthermore, we design a Confusion-minimizing Node Contrast (CNC) loss to mitigate node ambiguity and subgraph heterogeneity by contrastively enhancing node discriminability in the graph space. Our method significantly outperforms prior CD-FSMIS approaches across multiple cross-domain benchmarks, achieving state-of-the-art performance while simultaneously preserving strong segmentation accuracy on the source domain.
Please install the following essential dependencies:
dcm2nii
json5==0.8.5
jupyter==1.0.0
nibabel==2.5.1
numpy==1.22.0
opencv-python==4.5.5.62
Pillow>=8.1.1
sacred==0.8.2
scikit-image==0.18.3
SimpleITK==1.2.3
torch==1.10.2
torchvision==0.11.2
tqdm==4.62.3
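For convenience, here is a minimal install sketch based on the pinned versions above (the virtual-environment name is our choice here, and `dcm2nii` must be installed separately since it is not a pip package):

```bash
# Minimal sketch: create an environment and install the pinned Python dependencies above.
# Note: dcm2nii is a standalone converter and is not installable via pip.
python -m venv cgraph-env && source cgraph-env/bin/activate
pip install json5==0.8.5 jupyter==1.0.0 nibabel==2.5.1 numpy==1.22.0 \
    opencv-python==4.5.5.62 "Pillow>=8.1.1" sacred==0.8.2 \
    scikit-image==0.18.3 SimpleITK==1.2.3 torch==1.10.2 \
    torchvision==0.11.2 tqdm==4.62.3
```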
Please download:
- Abdominal MRI: Combined Healthy Abdominal Organ Segmentation dataset
- Abdominal CT: Multi-Atlas Abdomen Labeling Challenge
- Cardiac LGE and b-SSFP: Multi-sequence Cardiac MRI Segmentation dataset
Pre-processing follows Ouyang et al.; we use the procedure described in their GitHub repository.
- Compile `./data/supervoxels/felzenszwalb_3d_cy.pyx` with Cython (`python ./data/supervoxels/setup.py build_ext --inplace`) and run `./data/supervoxels/generate_supervoxels.py` (a consolidated command sketch follows this list).
- Download the pre-trained ResNet-50 weights (DeepLabV3 version), place them in your checkpoints folder, and replace the absolute path in `./models/encoder.py`.
- Run `./script/train_<direction>.sh`, for example `./script/train_ct2mr.sh`.
- (Optional) You can download our pretrained models for different domains:
- Abdominal CT: Google Drive
- Abdominal MRI: Google Drive
- Cardiac LGE: Google Drive
- Cardiac b-SSFP: Google Drive
After downloading, update the path accordingly in the test script.
- Run the following script to perform inference: `./script/test_<direction>.sh`
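For reference, the end-to-end command sequence looks roughly as follows. This is a sketch that assumes the repository's default layout and that the test script mirrors the `<direction>` naming of the training example; adjust paths and checkpoint locations to your setup.

```bash
# Sketch of the full pipeline (paths taken from the steps above).
# 1) Build the Cython extension for 3D Felzenszwalb supervoxels.
python ./data/supervoxels/setup.py build_ext --inplace
# 2) Generate the supervoxels used by training.
python ./data/supervoxels/generate_supervoxels.py
# 3) Train, e.g. abdominal CT -> MRI; other directions use the matching script.
bash ./script/train_ct2mr.sh
# 4) Inference; test_ct2mr.sh is assumed to follow the training script's naming.
bash ./script/test_ct2mr.sh
```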
🖼️ Prediction maps for the four cross-domain directions are available here — perfect for a quick glance!
Our code is built upon the works of SSL-ALPNet, ADNet, and ViG; we appreciate the authors for their excellent contributions!
If you use this code in your research or project, please consider citing our paper. Thanks! 🥂
@article{bo2025CGraph,
title={Contrastive Graph Modeling for Cross-Domain Few-Shot Medical Image Segmentation},
author={Yuntian Bo and Tao Zhou and Zechao Li and Haofeng Zhang and Ling Shao},
year={2025},
eprint={2512.21683},
archivePrefix={arXiv},
primaryClass={cs.CV},
}
