
Multimodal self-supervised learning approach and network for 3D-to-2D tasks

This repository contains the source code of the following article:

  • J. Morano, G. Aresta, D. Lachinov, J. Mai, U. Schmidt-Erfurth and H. Bogunović, "Self-supervised learning via inter-modal reconstruction and feature projection networks for label-efficient 3D-to-2D segmentation," MICCAI 2023.

Paper (arXiv) · Paper (Springer) · Poster · How To Use

Inter-modal reconstruction approach

(Figure: schematic of the proposed inter-modal reconstruction approach.)

3D-to-2D network architecture

(Figure: diagram of the proposed 3D-to-2D network architecture.)

Our proposed 3D-to-2D segmentation network, FPN, is available in models/fusion_nets.py, along with implementations of two state-of-the-art networks: the network of Lachinov et al. (MICCAI 2021) and ReSensNet (Seeböck et al., Ophthalmology Retina, 2022).
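
For orientation, the networks can be loaded like any PyTorch module. The following is only a minimal sketch: the class name FPN, the no-argument constructor, and the input shape are assumptions for illustration, so check models/fusion_nets.py for the actual signatures.

# Minimal usage sketch. The class name, constructor arguments, and input
# shape below are assumptions; see models/fusion_nets.py for the real API.
import torch
from models.fusion_nets import FPN

model = FPN()
volume = torch.randn(1, 1, 64, 256, 256)  # hypothetical (B, C, D, H, W) OCT volume
en_face_map = model(volume)               # 3D-to-2D: the output is a 2D (en-face) map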

Setting up the environment

The code should work with recent versions of Python (e.g., 3.11.4) and PyTorch (e.g., 2.0.1), with CUDA 11.7.

The recommended way to set up the environment is with a Python virtual environment.

To do so, you can run the following commands:

# Create Python environment
python3 -m venv venv

# Activate Python environment
source venv/bin/activate

# Install requirements
pip install -r requirements.txt
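
After installing the requirements, a quick sanity check inside the activated environment confirms that PyTorch and CUDA are visible:

# Sanity check (run inside the activated venv)
import torch

print(torch.__version__)          # e.g. 2.0.1
print(torch.version.cuda)         # e.g. 11.7
print(torch.cuda.is_available())  # True if a compatible GPU and driver are present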

Setting up the original environment

All the experiments were originally run on a server with Python 3.6.8, PyTorch 1.10.2, and CUDA 11.3.

To install this version of Python, you can use pyenv, which can be easily installed using pyenv-installer.

The original requirements are listed in original-requirements.txt.

To set up the original environment, you can run the following commands:

# Install pyenv
curl https://pyenv.run | bash

# Install Python 3.6.8
pyenv install -v 3.6.8

# Create Python environment
$PYENV_ROOT/versions/3.6.8/bin/python3 -m venv venv

# Activate Python environment
source venv/bin/activate

# Install requirements
pip install --upgrade pip
pip install -r original-requirements.txt
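
As above, a short check inside the activated environment confirms that the interpreter and PyTorch match the original setup:

# Confirm the interpreter and PyTorch match the original setup
import sys
import torch

print(sys.version.split()[0])  # expected: 3.6.8
print(torch.__version__)       # expected: 1.10.2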

Run training

See run.sh.

Available options can be found in config.py.
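
To get a quick overview of the available options without reading the whole file, config.py can be inspected directly. This is only a sketch: it assumes config.py defines its options at module level and imports without side effects, which may not match the actual mechanism.

# List option names defined in config.py (assumes no import-time side effects)
import config

print([name for name in dir(config) if not name.startswith("_")])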

Citation

If you find this repository useful in your research, please cite:

@misc{morano2023selfsupervised,
  author = {Jos{\'{e}} Morano and Guilherme Aresta and Dmitrii Lachinov and Julia Mai and Ursula Schmidt-Erfurth and Hrvoje Bogunovi{\'{c}}},
  title = {Self-supervised learning via inter-modal reconstruction and feature projection networks for label-efficient 3D-to-2D segmentation},
  publisher = {arXiv},
  year = {2023},
  doi = {10.48550/arXiv.2307.03008}
}

Moreover, if you use any of the state-of-the-art networks, please cite the corresponding paper:

@misc{lachinov2021projective,
  author = {Dmitrii Lachinov and Philipp Seeb\"{o}ck and Julia Mai and Ursula Schmidt-Erfurth and Hrvoje Bogunovi{\'{c}}},
  title = {Projective Skip-Connections for Segmentation Along a Subset of Dimensions in Retinal OCT},
  publisher = {arXiv},
  year = {2021},
  doi = {10.48550/ARXIV.2108.00831}
}
@article{seebock2022linking,
  author = {Philipp Seeb\"{o}ck and Wolf-Dieter Vogl and Sebastian M. Waldstein and Jose Ignacio Orlando and Magdalena Baratsits and Thomas Alten and Mustafa Arikan and Georgios Mylonas and Hrvoje Bogunovi{\'{c}} and Ursula Schmidt-Erfurth},
  title = {Linking Function and Structure with {ReSensNet}},
  journal = {Ophthalmology Retina},
  doi = {10.1016/j.oret.2022.01.021},
  year = {2022},
  month = jun,
  publisher = {Elsevier {BV}},
  volume = {6},
  number = {6},
  pages = {501--511}
}
