TransColor : Medical Image Colorization Based on Transformer with Content and Structure Preservation
Authors: Liming Xu, Dengping Zhao, Bochuan Zheng, Weisheng Li and Xianhua Zeng
In this paper, we propose a transformer-based model for grey-scale medical image colorization based on real human slice images. Compared with state-of-the-art methods, TransColor improves the coloring effect, makes the synthetic images more realistic, and has stronger feature representation ability.
The TransColor pipeline consists of the following four steps: (1) segment the reference image and the original image into patches and generate patch sequences by linear projection; (2) feed the original-image sequence with CAPE and the reference-image sequence with SAPE into the Transformer encoders, respectively; (3) stylize the content sequence according to the style sequence in a multi-layer Transformer decoder; and (4) obtain the synthetic image with realistic physical colors using a 3-layer CNN decoder.
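The four steps above can be sketched roughly as the following PyTorch module. All module names, dimensions, and layer counts here are illustrative assumptions (the released weights use the authors' actual vgg-model / vit_embedding / decoder components), and the CAPE/SAPE positional encodings are omitted for brevity:

```python
# Minimal sketch of the four TransColor steps. Hypothetical dims/names;
# requires torch >= 1.9 (for batch_first Transformer layers).
import torch
import torch.nn as nn

class TransColorSketch(nn.Module):
    def __init__(self, patch=8, dim=256, heads=8, layers=3):
        super().__init__()
        # (1) split images into patches and linearly project them to sequences
        self.content_embed = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)  # grey-scale input
        self.style_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)    # color reference
        enc = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        # (2) separate encoders for the content (CAPE) and style (SAPE) sequences
        self.content_enc = nn.TransformerEncoder(enc, num_layers=layers)
        self.style_enc = nn.TransformerEncoder(enc, num_layers=layers)
        # (3) multi-layer decoder stylizes the content sequence with the style sequence
        dec = nn.TransformerDecoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.trans_dec = nn.TransformerDecoder(dec, num_layers=layers)
        # (4) 3-layer CNN decoder maps the sequence back to an RGB image
        self.cnn_dec = nn.Sequential(
            nn.ConvTranspose2d(dim, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, grey, ref):
        c = self.content_embed(grey)              # B x dim x h x w
        h, w = c.shape[-2:]
        c = c.flatten(2).transpose(1, 2)          # B x (h*w) x dim patch sequence
        s = self.style_embed(ref).flatten(2).transpose(1, 2)
        c = self.content_enc(c)                   # positional encodings omitted here
        s = self.style_enc(s)
        x = self.trans_dec(c, s)                  # content attends to style
        x = x.transpose(1, 2).reshape(x.size(0), -1, h, w)
        return self.cnn_dec(x)

out = TransColorSketch()(torch.rand(1, 1, 64, 64), torch.rand(1, 3, 64, 64))
print(out.shape)  # torch.Size([1, 3, 64, 64])
```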
- python 3.8
- pytorch 1.5.1
- PIL, numpy, scipy
- tqdm
Pretrained models: vgg-model, vit_embedding, decoder, Transformer_module
Please download them and put them into the folder ./experiments/
python test.py
The real human slice dataset is collected from the color frozen-section images of the US National Library of Medicine's Visible Human Project (VHP).
The grey-scale medical images are derived from the brain dataset of The Whole Brain Atlas (harvard.edu).
python train.py --batch_size 8
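A hypothetical sketch of how train.py might consume the --batch_size flag shown above; only the flag name comes from the command, and the other arguments and defaults are assumptions:

```python
# Hypothetical argument parsing for train.py; the --lr flag and all
# defaults are assumptions, not taken from the released code.
import argparse

def parse_args(argv=None):
    p = argparse.ArgumentParser(description="Train TransColor")
    p.add_argument("--batch_size", type=int, default=8,
                   help="number of image pairs per training step")
    p.add_argument("--lr", type=float, default=5e-4,
                   help="learning rate (assumed default)")
    return p.parse_args(argv)

args = parse_args(["--batch_size", "8"])
print(args.batch_size)  # 8
```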
If you find our work useful in your research, please cite our paper. Thank you!

