This repository contains the official implementation of Co-VeGAN: Complex-Valued Generative Adversarial Network for Compressive Sensing MR Image Reconstruction by Bhavya Vasudeva*, Puneesh Deora*, Saumik Bhattacharya, Pyari Mohan Pradhan (*equal contribution).
The code was written in Python 3.6.8 with the following dependencies:
- cuda release 9.0, V9.0.176
- tensorflow 1.12.0
- keras 2.2.4
- numpy 1.16.4
- scikit-image 0.15.0
- matplotlib 3.1.0
- nibabel 2.4.1
- cuDNN 7.4.1
This code has been tested on Ubuntu 16.04.6 LTS with 4 NVIDIA GeForce GTX 1080 Ti GPUs (each with 11 GB of memory).
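For reference, a quick version check along these lines can help confirm the environment before running the code (this snippet is illustrative and not part of the repository):

```python
# Sanity check of the versions listed above (illustrative, not part of the repo).
import tensorflow as tf
import keras
import numpy as np
import skimage
import matplotlib
import nibabel as nib

print("TensorFlow:", tf.__version__)          # expected 1.12.0
print("Keras:", keras.__version__)            # expected 2.2.4
print("NumPy:", np.__version__)               # expected 1.16.4
print("scikit-image:", skimage.__version__)   # expected 0.15.0
print("matplotlib:", matplotlib.__version__)  # expected 3.1.0
print("nibabel:", nib.__version__)            # expected 2.4.1
print("GPU available:", tf.test.is_gpu_available())
```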
- Downloading the datasets:
MICCAI 2013 dataset:
- The MICCAI 2013 grand challenge dataset can be downloaded from this webpage. Registration and filling out a Google form are required to download the data.
- Download and save the `training-training` and `training-testing` folders, which contain the training and testing data, respectively, into the repository folder.
MRNet dataset:
- The MRNet dataset can be downloaded from this webpage. Registration, via the form at the end of the page, is required to download the data.
- Download and save the `train` and `valid` folders, which contain the training and testing data, respectively, into the repository folder.
fastMRI dataset:
- The fastMRI dataset is available on this webpage. Fill out the form at the end of the page to receive the download links via email.
- Download and save the `knee_singlecoil_train` folder, from which the training and testing data is created. Extract the files into a folder named `singlecoil_train` within the repository folder.
- Run the following command to create the ground-truth (GT) dataset:
python dataset_load.py
- Run the following command to create the undersampled dataset:
python usamp_data.py
- By default, these scripts create the training data using the MICCAI 2013 dataset. The variables `dataset` and `mode` can be changed in both files to use the MRNet or fastMRI datasets, or to create testing data.
- The `masks` folder contains the undersampling masks used in this work. The path to the mask can be modified in `usamp_data.py`, as required (a conceptual sketch of the masking step is shown below).
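As background for the undersampling step, the sketch below shows the general idea of masking k-space with NumPy. The file names are assumptions for illustration only; this is not the exact logic in `usamp_data.py`.

```python
# Conceptual k-space undersampling sketch (file names are assumptions).
import numpy as np

image = np.load('image.npy')        # hypothetical GT slice, shape (H, W)
mask = np.load('masks/mask.npy')    # hypothetical binary sampling mask, shape (H, W)

kspace = np.fft.fftshift(np.fft.fft2(image))              # full, centered k-space
kspace_us = kspace * mask                                  # keep only sampled locations
zero_filled = np.fft.ifft2(np.fft.ifftshift(kspace_us))    # zero-filled reconstruction

# For real-valued datasets, the magnitude is typically used as the network input.
zf_input = np.abs(zero_filled)
```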
- Move the files in the `complexnn` folder to the repository folder.
- Run the following command to train the model, after checking the paths:
For real-valued datasets (MICCAI 2013 and MRNet):
python train_model.py
For the complex-valued dataset (fastMRI):
python train_model_complex.py
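A common way to feed complex-valued MR data to a complex-valued network is to stack the real and imaginary parts along the channel axis. The snippet below is only an illustrative sketch with assumed shapes and file names, not the repository's actual data pipeline:

```python
# Illustration only: real/imaginary parts stacked as channels
# (shapes and file names are assumptions, not the repo's loader).
import numpy as np

zero_filled = np.load('zf_complex.npy')   # hypothetical complex array, shape (N, H, W)

x = np.stack([zero_filled.real, zero_filled.imag], axis=-1)  # shape (N, H, W, 2)
print(x.shape, x.dtype)
```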
- Run the following command to test the model, after checking the paths:
For real-valued datasets (MICCAI 2013 and MRNet):
python test_model.py
For the complex-valued dataset (fastMRI):
python test_model_complex.py
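PSNR and SSIM are the usual metrics for CS-MRI reconstruction. The sketch below shows one way to compute them with the scikit-image 0.15 API; the file names are assumptions, and the repository's test scripts compute their own metrics.

```python
# Minimal PSNR/SSIM evaluation sketch (scikit-image 0.15 API; file names assumed).
# In scikit-image >= 0.16 these functions moved to skimage.metrics.
import numpy as np
from skimage.measure import compare_psnr, compare_ssim

gt = np.load('gt_test.npy')        # hypothetical ground-truth images, shape (N, H, W)
recon = np.load('recon_test.npy')  # hypothetical reconstructions, shape (N, H, W)

psnr_vals = [compare_psnr(g, r, data_range=g.max() - g.min()) for g, r in zip(gt, recon)]
ssim_vals = [compare_ssim(g, r, data_range=g.max() - g.min()) for g, r in zip(gt, recon)]
print('PSNR: %.2f dB, SSIM: %.4f' % (np.mean(psnr_vals), np.mean(ssim_vals)))
```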
- The pre-trained generator weights for various undersampling patterns are available at:
MICCAI 2013: 30% 1D-G • 30% Radial • 30% Spiral • 20% 1D-G • 10% 1D-G
fastMRI: 30% 1D-G • 30% Radial • 30% Spiral • 20% 1D-G • 10% 1D-G
- Download the required weights into the repository folder.
- Run the following command, after updating the paths:
For the MICCAI 2013 dataset:
python test_model.py
For the fastMRI dataset:
python test_model_complex.py
If you find our research useful, please cite our work:
@InProceedings{Vasudeva_2022_WACV,
author = {Vasudeva, Bhavya and Deora, Puneesh and Bhattacharya, Saumik and Pradhan, Pyari Mohan},
title = {Compressed Sensing MRI Reconstruction With Co-VeGAN: Complex-Valued Generative Adversarial Network},
booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
month = {January},
year = {2022},
pages = {672-681}
}
Copyright 2020 Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.