
C-Graph

arXiv Xplore

Official code for the paper: Contrastive Graph Modeling for Cross-Domain Few-Shot Medical Image Segmentation

  • [News!] 2025-06-03: We have uploaded the full code.
  • [News!] 2025-06-13: We have uploaded the model weights and prediction maps. As of now, all our experimental code and results have been open-sourced. We are still actively updating this repository for better result presentation. Stay tuned!
  • [News!] 2025-12-25 🎄: Our paper has been accepted for publication in IEEE Transactions on Medical Imaging! 🎅🎁

✅ TODO List

  • Release model code.
  • Release model weights.
  • Release model prediction maps.

🤓 Modeling Anatomy as Graphs


TL;DR: Anatomical structures are highly consistent across medical imaging domains and can be modeled as graphs. Compared to domain information filtering methods [1] [2], our approach yields superior cross-domain generalization while preserving strong source-domain specialization.
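To make the idea concrete, here is a minimal, self-contained PyTorch sketch of treating the pixels of a feature map as graph nodes and their feature affinities as edges, with one step of explicit node interaction. It only illustrates the general concept; the function names are hypothetical and this is not the SPG layer from the paper.

```python
# Illustrative sketch only (not the paper's SPG layer): treat each spatial
# location of a feature map as a graph node, use cosine feature affinity as
# the edge weight, and propagate node features over the k strongest edges.
import torch
import torch.nn.functional as F

def pixel_affinity_graph(feat: torch.Tensor, k: int = 8):
    """feat: (B, C, H, W) feature map -> node features and a row-normalised adjacency."""
    B, C, H, W = feat.shape
    nodes = feat.flatten(2).transpose(1, 2)           # (B, N, C), N = H*W
    nodes_n = F.normalize(nodes, dim=-1)
    affinity = nodes_n @ nodes_n.transpose(1, 2)      # (B, N, N) cosine affinities
    # keep only the k most similar neighbours per node
    topk = affinity.topk(k, dim=-1)
    adj = torch.zeros_like(affinity).scatter_(-1, topk.indices, topk.values)
    adj = adj / adj.sum(dim=-1, keepdim=True).clamp_min(1e-6)
    return nodes, adj

def propagate(nodes: torch.Tensor, adj: torch.Tensor):
    """One step of explicit node interaction: aggregate neighbour features."""
    return nodes + adj @ nodes                        # residual message passing

feat = torch.randn(1, 64, 32, 32)                     # e.g. a backbone feature map
nodes, adj = pixel_affinity_graph(feat)
updated = propagate(nodes, adj)                       # (1, 1024, 64)
```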

📋 Abstract

Cross-domain few-shot medical image segmentation (CD-FSMIS) offers a promising and data-efficient solution for medical applications where annotations are severely scarce and multimodal analysis is required. However, existing methods typically filter out domain-specific information to improve generalization, which inadvertently limits cross-domain performance and degrades source-domain accuracy. To address this, we present Contrastive Graph Modeling (C-Graph), a framework that leverages the structural consistency of medical images as a reliable domain-transferable prior. We represent image features as graphs, with pixels as nodes and semantic affinities as edges. A Structural Prior Graph (SPG) layer is proposed to capture and transfer target-category node dependencies and enable global structure modeling through explicit node interactions. Building upon SPG layers, we introduce a Subgraph Matching Decoding (SMD) mechanism that exploits semantic relations among nodes to guide prediction. Furthermore, we design a Confusion-minimizing Node Contrast (CNC) loss to mitigate node ambiguity and subgraph heterogeneity by contrastively enhancing node discriminability in the graph space. Our method significantly outperforms prior CD-FSMIS approaches across multiple cross-domain benchmarks, achieving state-of-the-art performance while simultaneously preserving strong segmentation accuracy on the source domain.
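For intuition about the decoding and contrastive components described above, the hedged sketch below shows a generic prototype-based node matching step and a plain supervised node-contrast loss. Both are simplified stand-ins for the ideas behind SMD and CNC; the names, shapes, and loss form are assumptions, not the paper's implementation.

```python
# Illustrative sketch only: a generic prototype-matching decoder and a simple
# supervised node-contrast loss, in the spirit of SMD and CNC but not the
# paper's actual implementation.
import torch
import torch.nn.functional as F

def match_query_nodes(support_nodes, support_mask, query_nodes, tau: float = 0.1):
    """support_nodes: (N_s, C), support_mask: (N_s,) in {0, 1}, query_nodes: (N_q, C).
    Returns per-node background/foreground logits for the query."""
    fg_proto = (support_nodes * support_mask[:, None]).sum(0) / support_mask.sum().clamp_min(1)
    bg_proto = (support_nodes * (1 - support_mask)[:, None]).sum(0) / (1 - support_mask).sum().clamp_min(1)
    protos = F.normalize(torch.stack([bg_proto, fg_proto]), dim=-1)   # (2, C)
    q = F.normalize(query_nodes, dim=-1)
    return q @ protos.t() / tau                                        # (N_q, 2)

def node_contrast_loss(nodes, labels, tau: float = 0.1):
    """Pull same-class nodes together and push different-class nodes apart
    (a plain supervised contrastive loss over graph nodes)."""
    z = F.normalize(nodes, dim=-1)
    sim = z @ z.t() / tau                                              # (N, N)
    same = (labels[:, None] == labels[None, :]).float()
    not_self = 1 - torch.eye(len(z), device=z.device)
    pos_mask = same * not_self
    log_prob = sim - torch.logsumexp(sim.masked_fill(not_self == 0, -1e9), dim=1, keepdim=True)
    return -(pos_mask * log_prob).sum(1).div(pos_mask.sum(1).clamp_min(1)).mean()
```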

⏳ Quick start

🛠 Dependencies

Please install the following essential dependencies:

dcm2nii
json5==0.8.5
jupyter==1.0.0
nibabel==2.5.1
numpy==1.22.0
opencv-python==4.5.5.62
Pillow>=8.1.1
sacred==0.8.2
scikit-image==0.18.3
SimpleITK==1.2.3
torch==1.10.2
torchvision==0.11.2
tqdm==4.62.3

📚 Datasets and Preprocessing

Please download:

  1. Abdominal MRI: Combined Healthy Abdominal Organ Segmentation dataset
  2. Abdominal CT: Multi-Atlas Abdomen Labeling Challenge
  3. Cardiac LGE and b-SSFP: Multi-sequence Cardiac MRI Segmentation dataset

Pre-processing follows Ouyang et al.; we use the procedure described in their GitHub repository.

🔥 Training

  1. Compile ./data/supervoxels/felzenszwalb_3d_cy.pyx with Cython (python ./data/supervoxels/setup.py build_ext --inplace) and run ./data/supervoxels/generate_supervoxels.py
  2. Download the DeepLabV3 version of the pre-trained ResNet-50 weights, put them in your checkpoints folder, and replace the absolute path in ./models/encoder.py accordingly (see the sketch after this list).
  3. Run ./script/train_<direction>.sh, for example: ./script/train_ct2mr.sh
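As referenced in step 2, the snippet below is one possible way to obtain DeepLabV3-style ResNet-50 weights via torchvision and save them locally; the exact checkpoint format expected by ./models/encoder.py may differ, and the path shown is only a placeholder.

```python
# Hedged sketch: obtain DeepLabV3 ResNet-50 weights via torchvision and store
# the backbone state dict locally. The save path below is a placeholder; point
# the absolute path in ./models/encoder.py at the file you actually save.
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(pretrained=True)   # downloads COCO-pretrained weights
torch.save(model.backbone.state_dict(), "/path/to/checkpoints/deeplabv3_resnet50_backbone.pth")
```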

🔍 Inference

  1. (Optional) You can download our pretrained models for different domains:

    After downloading, update the checkpoint path in the corresponding test script.

  2. Run the following script to perform inference: ./script/test_<direction>.sh

  3. 🖼️ Prediction maps for the four cross-domain directions are available here — perfect for a quick glance!

🥰 Acknowledgements

Our code is built upon SSL-ALPNet, ADNet, and ViG; we appreciate the authors for their excellent contributions!

📝 Citation

If you use this code in your research or project, please consider citing our paper. Thanks! 🥂

@article{bo2025CGraph,
  title={Contrastive Graph Modeling for Cross-Domain Few-Shot Medical Image Segmentation}, 
  author={Yuntian Bo and Tao Zhou and Zechao Li and Haofeng Zhang and Ling Shao},
  year={2025},
  eprint={2512.21683},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
}
