Public PyTorch implementation of our paper *Unified Brain MR-Ultrasound Synthesis using Multi-Modal Hierarchical Representations*, accepted for presentation at MICCAI 2023.
If you find this code useful for your research, please cite the following paper:
```bibtex
@inproceedings{dorent2023unified,
  title={Unified Brain MR-Ultrasound Synthesis Using Multi-modal Hierarchical Representations},
  author={Dorent, Reuben and Haouchine, Nazim and K{\"o}gl, Fryderyk and Joutard, Samuel and Juvekar, Parikshit and Torio, Erickson and Golby, Alexandra J and Ourselin, S{\'e}bastien and Frisken, Sarah and Vercauteren, Tom and others},
  booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
  pages={448--458},
  year={2023},
  organization={Springer}
}
```
We introduce MHVAE, a deep hierarchical variational auto-encoder (VAE) that synthesizes missing images given the available imaging modalities.
*Example of synthesis (first column: input; last column: target ground-truth image; other columns: synthetic images for different temperatures).*
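For intuition, the "temperature" in such figures usually rescales the standard deviation of the latent distribution at sampling time: lower temperatures give more conservative samples, higher temperatures more diverse ones. The snippet below is a minimal, self-contained sketch of this idea; it is not the MHVAE implementation, and the function name and shapes are purely illustrative.

```python
import torch

def sample_with_temperature(mu, log_var, temperature=1.0):
    """Draw z ~ N(mu, (temperature * sigma)^2).

    temperature < 1 yields more conservative (higher-likelihood) samples;
    temperature > 1 yields more diverse ones. Illustrative sketch only.
    """
    std = torch.exp(0.5 * log_var) * temperature
    return mu + std * torch.randn_like(std)

# Hypothetical usage: one latent level with a unit-Gaussian prior.
mu, log_var = torch.zeros(1, 16), torch.zeros(1, 16)
for t in (0.5, 1.0, 1.5):
    z = sample_with_temperature(mu, log_var, temperature=t)
```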
The code is implemented in Python using the PyTorch library. Requirements:
- Set up a virtual environment (e.g. conda or virtualenv) with Python >= 3.6.9
- Install all requirements using:
```bash
pip install -r requirements.txt
```
The data and annotations are publicly available on TCIA (The Cancer Imaging Archive).
`train.py` is the main script for training the models.
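As a rough orientation only (not the repository's actual training code), a VAE-style training step combines a reconstruction term with a KL regularizer. All names below, including the assumption that the model returns `(reconstruction, mu, log_var)`, are hypothetical.

```python
import torch
import torch.nn.functional as F

def vae_step(model, x_source, x_target, optimizer, beta=1.0):
    """One illustrative training step: reconstruct the target modality
    from the source modality and regularize the latent posterior.
    Assumes `model(x)` returns (reconstruction, mu, log_var)."""
    optimizer.zero_grad()
    recon, mu, log_var = model(x_source)
    recon_loss = F.l1_loss(recon, x_target)
    # KL divergence between N(mu, sigma^2) and the unit-Gaussian prior.
    kl = -0.5 * torch.mean(1 + log_var - mu.pow(2) - log_var.exp())
    loss = recon_loss + beta * kl
    loss.backward()
    optimizer.step()
    return loss.item()
```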
`inference.py` is the main script for running inference.
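Conceptually, inference encodes the available modality, samples the latent (possibly at several temperatures), and decodes the missing modality. The sketch below illustrates that flow under stated assumptions; `model.encode` and `model.decode` are assumed interfaces, not the repository's actual API.

```python
import torch

@torch.no_grad()
def synthesize(model, x_source, temperatures=(0.5, 1.0)):
    """Illustrative inference: encode the available modality, sample the
    latent at several temperatures, and decode the missing modality."""
    model.eval()
    mu, log_var = model.encode(x_source)  # assumed interface
    outputs = []
    for t in temperatures:
        std = torch.exp(0.5 * log_var) * t
        z = mu + std * torch.randn_like(std)
        outputs.append(model.decode(z))   # assumed interface
    return outputs
```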
To use your own data, you only need to change the source and target paths, the data splits, and, if needed, the modalities used.
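For example, adapting the scripts to a custom dataset might amount to edits along these lines (all paths and variable names are hypothetical; the actual ones are defined in the repository's scripts):

```python
# Hypothetical example of the kind of edits needed for custom data.
source_dir = "data/my_dataset/mr"     # input modality (e.g. MR)
target_dir = "data/my_dataset/us"     # target modality (e.g. ultrasound)
train_ids = ["case_001", "case_002"]  # training split
val_ids = ["case_003"]                # validation split
```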