
Prompted Contextual Transformer for Incomplete-View CT Reconstruction

This repository contains the official implementation of the paper: "Prompted Contextual Transformer for Incomplete-View CT Reconstruction"

TL;DR

We build a robust and transferable network, named ProCT, that can reconstruct degraded CT images from a wide range of incomplete-view CT settings with a single model in one forward pass, by leveraging multi-setting synergy during training.

🚧We are currently cleaning and reformatting the code. Please stay tuned!🚧

Abstract

Promising computed tomography (CT) techniques for sparse-view and limited-angle scenarios can reduce the radiation dose, shorten the data acquisition time, and allow irregular and flexible scanning. Yet, these two scenarios involve multiple different settings that vary in view numbers or angular ranges, ultimately introducing complex artifacts to the reconstructed images. Existing CT reconstruction methods tackle these scenarios and/or settings in isolation, omitting their synergistic effects on each other for better robustness and transferability in clinical practice. In this paper, we frame these diverse settings as a unified incomplete-view CT problem, and propose a novel Prompted Contextual Transformer (ProCT) to harness the multi-setting synergy from these incomplete-view CT settings, thereby achieving more robust and transferable CT reconstruction. The novelties of ProCT lie in two folds. First, we devise projection view-aware prompting to provide setting-discriminative information, enabling a single ProCT to handle diverse settings. Second, we propose artifact-aware contextual learning to sense artifact pattern knowledge from in-context image pairs, making ProCT capable of accurately removing the complex, unseen artifacts. Extensive experimental results on two public clinical CT datasets demonstrate (i) superior performance of ProCT over state-of-the-art methods---including single-setting models---on a wide range of settings, (ii) strong transferability to unseen datasets and scenarios, and (iii) improved performance when integrating sinogram data.
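
For intuition on why these scenarios can be unified: sparse-view and limited-angle CT differ only in which projection views of a full scan are kept. The sketch below is purely illustrative (NumPy only; the view counts and angular range are hypothetical, not the paper's exact settings):

```python
import numpy as np

full_views = 720                                    # hypothetical full-view scan
full_angles = np.linspace(0, np.pi, full_views, endpoint=False)

# Sparse-view CT: keep every k-th view over the full angular range.
sparse_angles = full_angles[::8]                    # 90 of 720 views survive

# Limited-angle CT: keep all views, but only within a restricted angular range.
limited_angles = full_angles[full_angles < np.deg2rad(90)]   # a 90-degree arc

# Both are "incomplete-view" settings; they differ only in which views remain,
# which is the kind of setting-discriminative information the view-aware
# prompt is meant to convey.
print(len(sparse_angles), len(limited_angles))      # 90, 360
```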

Updates

  • training code.
  • demo.
  • pretrained model.
  • inference code.
  • architecture code.
  • 2023/12/13. Initial commit.

Environment Preparation

We build our model on the torch-radon toolbox, which provides highly efficient and differentiable tomography transforms. There is an official V1 repository and an unofficial but better-maintained V2 repository. V1 works with older PyTorch/CUDA versions (torch<=1.7, CUDA<=11.3), while V2 supports newer versions. Below is a walkthrough for installing this toolbox.

Installing Torch-Radon V1

  • Step 1: build a new environment
conda create -n tr37 python==3.7
conda activate tr37
  • Step 2: set up basic pytorch
conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=11.0 -c pytorch

or

pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
  • Step 3: install torch-radon from source (run inside the cloned official V1 repository)
python setup.py install
  • Step 4: install other related packages
conda install -c astra-toolbox astra-toolbox
conda install matplotlib
pip install einops
pip install opencv-python
conda install pillow
conda install scikit-image
conda install scipy==1.6.0
pip install wandb
conda install tqdm
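
After Step 4, a quick sanity check is to run a forward projection and filtered backprojection on a dummy image. This is a minimal sketch following the torch-radon V1 parallel-beam API (Radon, forward, filter_sinogram, backprojection); it only confirms the toolbox works on your GPU and is not ProCT-specific:

```python
import numpy as np
import torch
from torch_radon import Radon

device = torch.device("cuda")          # torch-radon requires a CUDA device
image_size, n_views = 256, 720
angles = np.linspace(0, np.pi, n_views, endpoint=False)

radon = Radon(image_size, angles, clip_to_circle=True)
x = torch.zeros(image_size, image_size, device=device)
x[96:160, 96:160] = 1.0                # simple square phantom

with torch.no_grad():
    sino = radon.forward(x)                                    # sinogram
    fbp = radon.backprojection(radon.filter_sinogram(sino))    # FBP reconstruction

print(sino.shape, fbp.shape)
```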

Installing Torch-Radon V2

  • Step 1: build a new environment
conda create -n tr39 python==3.9
conda activate tr39
  • Step 2: set up basic pytorch
conda install pytorch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 pytorch-cuda=12.1 -c pytorch -c nvidia
  • Step 3: install torch-radon from source (run inside the cloned V2 repository)
python setup.py install
  • Step 4: install other related packages (same as above)
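
The V2 fork targets newer PyTorch/CUDA and its API is not guaranteed to match V1 exactly, so the V1 snippet above may need adjustment. A minimal, API-agnostic check that the extension built correctly (consult the V2 repository's examples for actual usage):

```python
# Only verifies that the compiled extension loads against this PyTorch/CUDA
# combination and that a GPU is visible; it does not exercise the V2 API.
import torch
import torch_radon

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("torch-radon loaded from:", torch_radon.__file__)
```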

Dataset Preparation

We use the DeepLesion dataset and the AAPM-Mayo 2016 dataset in our experiments. The DeepLesion dataset can be downloaded from its official release page, and the AAPM dataset from the Clinical Innovation Center (or the Box link).

After downloading the two datasets, please arrange the DeepLesion dataset as follows:

__path/to/your/deeplesion/data
  |__000001_01_01
  |  |__103.png
  |  |__104.png
  |  |__...
  |
  |__000001_02_01
  |  |__008.png
  |  |__009.png
  |  |__...
  |
  |__...

and arrange the AAPM dataset as follows:

__path/to/your/aapm/data
  |__L067_FD_1_1.CT.0001.0001.2015.12.22.18.09.40.840353.358074219.npy
  |__L067_FD_1_1.CT.0001.0002.2015.12.22.18.09.40.840353.358074243.npy
  |__...

Finally, replace the global variables (DEEPL_DIR and AAPM_DIR) in datasets/lowlevel_ct_dataset.py with your own path/to/your/deeplesion/data and path/to/your/aapm/data!
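
Before editing the dataset module, a small script can confirm the folders match the layouts above. This is a hypothetical check (the paths are placeholders, and it assumes DeepLesion slices are stored as .png and AAPM slices as .npy, as shown in the trees above):

```python
from pathlib import Path

# Replace with your own locations, mirroring DEEPL_DIR / AAPM_DIR in
# datasets/lowlevel_ct_dataset.py.
deeplesion_dir = Path("path/to/your/deeplesion/data")
aapm_dir = Path("path/to/your/aapm/data")

# DeepLesion: one sub-folder per study, each holding PNG slices.
studies = [d for d in deeplesion_dir.iterdir() if d.is_dir()]
n_png = sum(len(list(d.glob("*.png"))) for d in studies)
print(f"DeepLesion: {len(studies)} study folders, {n_png} PNG slices")

# AAPM: a flat folder of NPY slices.
n_npy = len(list(aapm_dir.glob("*.npy")))
print(f"AAPM: {n_npy} NPY slices")
```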

Demo

Once the environment and datasets are ready, you can walk through the basic forward pass of ProCT in ./demo.ipynb. The checkpoint file is provided on the Releases page.

UPDATE. Since some users have trouble installing the torch-radon package, we provide pre-computed in-context pairs in ./samples as well as a simpler demo in ./demo_easy.ipynb, which does not require torch-radon at all!

Training and Inference

Once the environment and datasets are ready, you can train and test ProCT using the scripts train.sh and test.sh.

Acknowledgement

Big thanks to the authors of the open-source projects that this implementation builds on for their great work and insights!

Citation

If you find our work and code helpful, please kindly cite our paper :)

@article{ma2023proct,
  title={Prompted Contextual Transformer for Incomplete-View CT Reconstruction},
  author={Ma, Chenglong and Li, Zilong and He, Junjun and Zhang, Junping and Zhang, Yi and Shan, Hongming},
  journal={arXiv preprint arXiv:2312.07846},
  year={2023}
}
