Multi-scope Analysis Driven Hierarchical Graph Transformer for Whole Slide Image based Cancer Survival Prediction
Clone the repo:

```bash
git clone https://github.com/Baeksweety/HGTHGT && cd HGTHGT
```

Create a conda environment, activate it, and install the dependencies:

```bash
conda create -n env python=3.8
conda activate env
pip install -r requirements.txt
```
generate_superpixel.py shows how to generate merged superpixels from whole slide images, and graph_construction.ipynb shows how to transform a histological image into hierarchical graphs. After data processing is complete, put all hierarchical graphs into one folder, organized as follows:
```
PYG_Data
└── Dataset
    ├── pyg_data_1.pt
    ├── pyg_data_2.pt
    :
    └── pyg_data_n.pt
```
cluster.py shows how to generate the fixed number of clusters that will be used during training. The resulting files are organized as follows:
```
Cluster_Info
└── Dataset
    ├── cluster_info_1.pt
    ├── cluster_info_2.pt
    :
    └── cluster_info_n.pt
```
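The exact procedure lives in cluster.py; as a rough sketch, producing a fixed number of clusters from node features with k-means (all names, sizes, and the use of scikit-learn below are assumptions) could look like:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical sketch: assign each node's feature vector to one of a
# fixed number of clusters. Shapes and the cluster count are made up.
rng = np.random.default_rng(0)
features = rng.standard_normal((200, 16))   # e.g. 200 node features per slide

n_clusters = 8                              # the fixed cluster count used at train time
kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(features)
cluster_ids = kmeans.labels_                # one cluster index per node
print(sorted(set(cluster_ids)))             # [0, 1, 2, 3, 4, 5, 6, 7]
```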
First, set the data path, data splits, and hyperparameters in the file train.py. Then run the experiments with the following commands:
```bash
cd train
python train.py
```

or

```bash
bash run.sh
```
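run.sh is not reproduced here; a hypothetical version that launches all five folds (the --fold flag is an assumption — check train.py's argument parser for the real flag names) might look like:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of run.sh: launch training once per fold.
for fold in 0 1 2 3 4; do
    python train.py --fold "$fold"
done
```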
We provide 5-fold checkpoints for each dataset, which perform as follows:
| Dataset | CI |
| --- | --- |
| CRC | 0.607 |
| TCGA_LIHC | 0.657 |
| TCGA_KIRC | 0.646 |
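Here CI is the concordance index: the fraction of comparable patient pairs whose predicted risk scores are ordered consistently with their observed survival times (0.5 is random, 1.0 is perfect). A minimal pure-Python reference implementation (not the repo's evaluation code):

```python
def concordance_index(times, events, risks):
    """Fraction of comparable pairs ordered correctly by risk.

    times  - observed time for each sample
    events - 1 if the event (death) was observed, 0 if censored
    risks  - predicted risk score (higher = shorter expected survival)
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable if sample i had an observed event
            # strictly earlier than sample j's time.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1       # correctly ordered
                elif risks[i] == risks[j]:
                    concordant += 0.5     # tied risks count half
    return concordant / comparable

# Toy check: risks perfectly anti-ordered with survival times.
print(concordance_index([2, 4, 6, 8], [1, 1, 1, 0], [0.9, 0.7, 0.4, 0.1]))  # 1.0
```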
Our implementation refers to the following publicly available code:
- Pytorch Geometric--Fey M, Lenssen J E. Fast graph representation learning with PyTorch Geometric[J]. arXiv preprint arXiv:1903.02428, 2019.
- Histocartography--Jaume G, Pati P, Anklin V, et al. HistoCartography: A toolkit for graph analytics in digital pathology[C]//MICCAI Workshop on Computational Pathology. PMLR, 2021: 117-128.
- ViT Pytorch--Dosovitskiy A, Beyer L, Kolesnikov A, et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale[C]//International Conference on Learning Representations. 2020.
- NAGCN--Guan Y, Zhang J, Tian K, et al. Node-aligned graph convolutional network for whole-slide image representation and classification[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 18813-18823.