
ASIGN: Anatomy-aware Spatial Imputation Graphic Network for 3D Spatial Transcriptomics (CVPR 2025)


This project is the official implementation of ASIGN, an anatomy-aware spatial imputation graphic network for 3D spatial transcriptomics. Our paper has been accepted by CVPR 2025!

  1. Abstract
  2. Implementation
  3. Result

Abstract

ASIGN is a novel framework for 3D-ST prediction that transforms isolated 2D spatial relationships into a cohesive 3D structure through inter-layer overlap and similarity, integrating 3D spatial information into 3D-ST imputation. Our contributions can be summarized as:

  • We introduce a new learning paradigm that shifts from 2D WSI-ST predictions to partially known 3D volumetric WSI-ST imputation.
  • A new multi-level spatial attention graphic network is proposed to facilitate comprehensive feature integration across different layers, neighboring regions, and multiple resolutions, thus enabling precise sample-level predictions.

Figure 1: Problem definition.

Implementation

1. Data preparation

  • Data preparation includes two parts.
    • Data construction for 2D patches
    • 3D data pre-processing

Figure 3: Network structure.

1.1 Preparation for 2D multi-resolution patches

The code for preparing 2D multi-resolution patches is in the data_preprocessing folder. Run these Python files in the annotated sequence.

Note that different datasets may store their raw data in different formats; please adapt the code before applying it to other datasets.

  1. Patch cropping for region/global-level patches.
python 1_cropped_multi_level_patch.py
  2. Pair each spot with its region/global-level patches, and filter the region/global-level patches.
python 2_find_512_pair.py
python 3_find_1024_pair.py
  3. Extract features from patches at different resolutions.
python 4_feature_extraction.py
  4. Gene expression preprocessing; we follow the normalization method proposed by ST-Net.
python 5_1_find_high_expression.py
python 5_2_get_label_of_other_resolution.py
python 5_3_normalization.py
  5. Graph construction at the region/global level.
python 6_pt_construction.py
  6. Final step of 2D-level preprocessing: collect the 2D data information.
python 7_make_dataset.py
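Step 1 above crops patches at several resolutions around each spot. As a rough illustration of the idea (not the exact logic of 1_cropped_multi_level_patch.py; the function names, patch sizes, and clamping behavior here are all illustrative assumptions), a spot-centred multi-level crop can be sketched as:

```python
# Hedged sketch of spot-centred multi-resolution patch cropping.
# Names and sizes are illustrative, not the repository's exact values.

def crop_patch(image, cx, cy, size):
    """Crop a size x size window centred at (cx, cy) from a 2D image
    (list of rows), clamping the window so it stays inside the image."""
    h, w = len(image), len(image[0])
    half = size // 2
    x0 = max(0, min(cx - half, w - size))
    y0 = max(0, min(cy - half, h - size))
    return [row[x0:x0 + size] for row in image[y0:y0 + size]]

def multi_level_patches(image, cx, cy, sizes):
    """Return one patch per resolution level for a single spot."""
    return {s: crop_patch(image, cx, cy, s) for s in sizes}

# Toy 8x8 "image" with small patch sizes just to show the shapes.
img = [[y * 8 + x for x in range(8)] for y in range(8)]
patches = multi_level_patches(img, 4, 4, sizes=(2, 4))
print({s: (len(p), len(p[0])) for s, p in patches.items()})  # {2: (2, 2), 4: (4, 4)}
```

In practice the spot, region (512), and global (1024) levels would be cropped from the WSI at the spot's pixel coordinates, then paired and filtered in steps 2-3.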

At this link, we provide a set of sample files generated during preprocessing so you can check whether your preprocessing phase is correct.


1.2 3D Registration for samples

The code for 3D registration of samples is in the registration folder. Run these Python files in the annotated sequence.

Note: please modify the image folder path for your data.

  • Download ANTs from:

    https://github.com/ANTsX/ANTs
    
  • Run the Python files in order to obtain the overlap information

    python Step1_affine_registration_2d_PAS_2048_xfeat.py
    ...
    python Step3.5_registertoMiddle_ANTs_points_contour_fast.py

1.3 Preparation for 3D sample-level dataset

The code for preparing the 3D sample-level dataset is in the data_preprocessing_3d folder. Run these Python files in the annotated sequence.

  1. Calculate the IoU and similarity of spots across layers

    python 1_get_iou.py
    python 2_get_similarity.py
    
  2. Build edge weights between spots

    python 3_build_3d_information.py
    
  3. Combine the sample-level graph and information

    python 4_build_3d_graph.py
    python 5_build_3d_final_format.py
    
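Steps 1-2 above compute the cross-layer overlap and similarity that become edge weights in the 3D graph. A minimal sketch of the idea, assuming circular spots of equal radius and a simple product rule for combining overlap with feature similarity (the actual formula in 3_build_3d_information.py may differ):

```python
# Hedged sketch: IoU of registered circular spots across adjacent layers,
# combined with cosine similarity into a toy cross-layer edge weight.
# The product combination rule is an illustrative assumption.
import math

def spot_iou(c1, c2, r):
    """IoU of two equal-radius circles (spots) after registration."""
    d = math.dist(c1, c2)
    if d >= 2 * r:
        return 0.0
    if d == 0:
        return 1.0
    # Intersection area of two circles of radius r whose centres are d apart.
    inter = 2 * r * r * math.acos(d / (2 * r)) - (d / 2) * math.sqrt(4 * r * r - d * d)
    union = 2 * math.pi * r * r - inter
    return inter / union

def cosine_similarity(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

def edge_weight(c1, c2, r, feat1, feat2):
    """Toy cross-layer edge weight: spatial overlap scaled by feature similarity."""
    return spot_iou(c1, c2, r) * cosine_similarity(feat1, feat2)

w = edge_weight((0.0, 0.0), (0.5, 0.0), r=1.0, feat1=[1.0, 2.0], feat2=[1.0, 2.0])
print(round(w, 3))
```

Spots with no overlap get weight 0, so only genuinely adjacent spots across layers are connected in the sample-level graph.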

2. Pre-processed public datasets

For your convenience, we have released two processed public datasets, HER2 and ST-data, so you can reproduce our results. The related data sources are listed as follows:


3. Training and Evaluation

MSAGNet comprises cross-attention layers, GAT blocks, and Transformer layers to integrate and aggregate features across multiple resolution levels, 3D sample levels, and the patch level itself, respectively.

Note: Since different datasets use different naming formats, the paths of some files used in dataloader_3d.py and main_3d.py need to be changed for your files. Also, to achieve the best performance on your dataset, please fine-tune the hyperparameters used in dataloader_3d.py and main_3d.py.

Training Implementation

  • To train ASIGN on 2D-level prediction, run main.py:

    CUDA_VISIBLE_DEVICES=0 python main.py --root_path '' --base_lr 0.001 --batch_size 128
  • To train ASIGN on 3D-level prediction, run main_3d.py:

    CUDA_VISIBLE_DEVICES=0 python main_3d.py --root_path '' --base_lr 0.001 --batch_size 128

Result

Experimental results demonstrate ASIGN's superior performance and robust generalization in cross-sample validation across multiple public datasets, significantly improving the PCC metric for gene expression tasks from approximately 0.5 with existing methods to around 0.7. ASIGN offers a promising pathway to achieve accurate and cost-effective 3D ST data for real-world clinical applications.

Table: Experimental results.


Citation

If you find this project useful for your research, please use the following BibTeX entry.

@inproceedings{zhu2025asign,
  title={ASIGN: an anatomy-aware spatial imputation graphic network for 3D spatial transcriptomics},
  author={Zhu, Junchao and Deng, Ruining and Yao, Tianyuan and Xiong, Juming and Qu, Chongyu and Guo, Junlin and Lu, Siqi and Yin, Mengmeng and Wang, Yu and Zhao, Shilin and others},
  booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
  pages={30829--30838},
  year={2025}
}
