This is the official code for our IEEE Transactions on Medical Imaging paper:
Boosting Convolution with Efficient MLP-Permutation for Volumetric Medical Image Segmentation
Yi Lin*, Xiao Fang*, Dong Zhang, Kwang-Ting Cheng, Hao Chen
The code is largely built on the COVID-19-20 challenge baseline.
pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu117
pip install monai==1.1.0
pip install 'monai[nibabel,ignite,tqdm]'
For more details, please refer to the PyTorch installation guide and the MONAI installation guide.
COVID-19-20: We provide the shell script below for training and testing. Users can then follow the instructions in the COVID-19-20 challenge baseline to submit predictions to the challenge leaderboard.
sh run_net.sh
Synapse: We follow the data split of TransUNet and use the nnUNet framework for data preprocessing, training, and testing. During training, the nnUNet framework partitions the training data into five folds; we use four folds for training and one for validation. We provide the data split in the "DATA/Synapse" folder.
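The 4-train / 1-val partition described above can be illustrated with a minimal, self-contained sketch (this mimics nnUNet's internal five-fold splitting in spirit only; the case IDs below are hypothetical placeholders, not the actual Synapse identifiers):

```python
# Sketch of a five-fold split: each fold serves once as validation
# while the remaining four folds form the training set.
cases = [f"case_{i:04d}" for i in range(10)]  # hypothetical case IDs

def five_fold_splits(items):
    """Partition items into 5 interleaved folds; yield (train, val) per fold."""
    folds = [items[i::5] for i in range(5)]
    for k in range(5):
        val = folds[k]
        train = [c for j, fold in enumerate(folds) if j != k for c in fold]
        yield train, val

for k, (train, val) in enumerate(five_fold_splits(cases)):
    print(f"fold {k}: {len(train)} train / {len(val)} val")
```

Note that nnUNet performs its own splitting during preprocessing; the provided "DATA/Synapse" split files should be used to reproduce the paper's partition exactly.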
MSD BraTS: We follow VT-UNet for data preprocessing, training and testing.
LiTS17: We follow the MedISeg framework for data preprocessing, training, and testing.
Note: We provide the network configurations for all datasets in config.py.
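The exact contents of config.py are dataset-specific; as a rough orientation, a per-dataset configuration might look like the hypothetical sketch below (the key names and values here are illustrative assumptions, not the repository's actual settings):

```python
# Hypothetical per-dataset configuration sketch; the real keys and values
# live in config.py and should be consulted directly.
CONFIGS = {
    "Synapse": {
        "in_channels": 1,   # CT volumes are single-channel
        "num_classes": 9,   # 8 abdominal organs + background (TransUNet split)
    },
    "MSD_BraTS": {
        "in_channels": 4,   # four MRI modalities
        "num_classes": 4,   # 3 tumor sub-regions + background
    },
}

def get_config(dataset: str) -> dict:
    """Look up the (hypothetical) configuration for a dataset."""
    return CONFIGS[dataset]

print(get_config("Synapse")["num_classes"])
```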
We thank MONAI, nnUNet, VT-UNet, and MedISeg, whose code we borrowed to conduct our experiments.
Please cite our paper if you use this code.
@ARTICLE{PHNet,
  author={Lin, Yi and Fang, Xiao and Zhang, Dong and Cheng, Kwang-Ting and Chen, Hao},
  journal={IEEE Transactions on Medical Imaging},
  title={Boosting Convolution With Efficient MLP-Permutation for Volumetric Medical Image Segmentation},
  year={2025},
  volume={44},
  number={5},
  pages={2341-2352},
  keywords={Transformers;Three-dimensional displays;Image segmentation;Convolutional neural networks;Feature extraction;Computer architecture;Computational efficiency;Decoding;Technological innovation;Synapses;Medical image segmentation;convolution neural network (CNN);multi-layer perceptron (MLP)},
  doi={10.1109/TMI.2025.3530113}
}