Merge pull request #44 from rundherum/fix-documentation
Improve documentation
fabianbalsiger committed Oct 12, 2021
2 parents 081dbe0 + dce2840 commit 446192d
Showing 28 changed files with 240 additions and 222 deletions.
22 changes: 22 additions & 0 deletions .readthedocs.yml
@@ -0,0 +1,22 @@
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

# Build documentation in the docs/ directory with Sphinx
sphinx:
  configuration: docs/conf.py

# Build the docs in additional formats such as PDF
formats:
  - epub
  - htmlzip
  - pdf

# Set the version of Python and requirements required to build your docs
python:
  version: 3.7
  install:
    - requirements: docs/rtd-requirements.txt
    - requirements: requirements.txt
58 changes: 58 additions & 0 deletions README.md
@@ -0,0 +1,58 @@
pymia
=====

<a href="https://pymia.readthedocs.io/en/latest/?badge=latest">
<img src="https://readthedocs.org/projects/pymia/badge/?version=latest" alt="Documentation status">
</a>

pymia is an open-source Python (py) package for deep learning-based medical image analysis (mia).
The package addresses two main parts of deep learning pipelines: data handling and evaluation.
The package itself is independent of the deep learning framework used but can easily be integrated into TensorFlow and PyTorch pipelines.
Therefore, pymia is highly flexible, allows for fast prototyping, and reduces the burden of implementing data handling and evaluation.

Main Features
-------------
The main features of pymia are data handling ([`pymia.data` package](https://pymia.readthedocs.io/en/latest/pymia.data.html)) and evaluation ([`pymia.evaluation` package](https://pymia.readthedocs.io/en/latest/pymia.evaluation.html)).
The data package is used to extract data (images, labels, demographics, etc.) from a dataset in the desired format (2-D, 3-D; full- or patch-wise) for feeding to a neural network.
The output of the neural network is then assembled back to the original format before extraction, if necessary.
The evaluation package provides both evaluation routines and metrics to assess predictions against references.
Evaluation can be used both for stand-alone result calculation and reporting, and for monitoring the training progress.
Further, pymia provides some basic image filtering and manipulation functionality ([`pymia.filtering` package](https://pymia.readthedocs.io/en/latest/pymia.filtering.html)).
We recommend following our [examples](https://pymia.readthedocs.io/en/latest/examples.html).

The following figure depicts pymia in the deep learning environment. The data package allows creating a dataset from raw data.
Extraction of the data from this dataset is possible in nearly every desired format (2-D, 3-D; full- or patch-wise) for feeding to a neural network.
The prediction of the neural network can, if necessary, be assembled back to the original format before extraction.
The evaluation package allows evaluating predictions against references using a large variety of metrics. It can be used stand-alone (solid) or for performance monitoring during training (dashed).

<img src="https://raw.githubusercontent.com/rundherum/pymia/master/docs/images/fig-overview.png" alt="The pymia package in the deep learning environment">
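
The following minimal sketch gives a flavor of the evaluation workflow described above. It is only an illustration under assumed names: the classes `SegmentationEvaluator`, `DiceCoefficient`, and `ConsoleWriter` and their signatures are taken from the pymia documentation and may differ between versions, so refer to the [evaluation examples](https://pymia.readthedocs.io/en/latest/examples.html) for the authoritative usage.

```python
import numpy as np
import pymia.evaluation.evaluator as eval_
import pymia.evaluation.metric as metric
import pymia.evaluation.writer as writer

# Toy binary segmentation masks (replace with real predictions and references).
prediction = np.zeros((32, 32, 32), dtype=np.uint8)
reference = np.zeros((32, 32, 32), dtype=np.uint8)
prediction[8:24, 8:24, 8:24] = 1
reference[10:24, 8:24, 8:24] = 1

# Evaluate the Dice coefficient for label 1 ("STRUCTURE") of one subject.
evaluator = eval_.SegmentationEvaluator([metric.DiceCoefficient()], {1: 'STRUCTURE'})
evaluator.evaluate(prediction, reference, 'Subject_1')

# Report the results, e.g., to the console (CSV output works analogously).
writer.ConsoleWriter().write(evaluator.results)
```

Calling the evaluator on validation subjects after each epoch corresponds to the dashed monitoring path in the figure above.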

Getting Started
---------------

If you are new to pymia, here are a few guides to get you up to speed right away:

- [Installation](https://pymia.readthedocs.io/en/latest/installation.html) for installation instructions - or simply run `pip install pymia`
- [Examples](https://pymia.readthedocs.io/en/latest/examples.html) give you an overview of pymia's intended use. Jupyter notebooks and Python scripts are available in the directory [./examples](https://github.com/rundherum/pymia/tree/master/examples/).
- [Do you want to contribute?](https://pymia.readthedocs.io/en/latest/contribution.html)
- [Change history](https://pymia.readthedocs.io/en/latest/history.html)
- [Acknowledgments](https://pymia.readthedocs.io/en/latest/acknowledgment.html)

Citation
--------
If you use pymia for your research, please acknowledge it accordingly by citing our paper:

[Jungo, A., Scheidegger, O., Reyes, M., & Balsiger, F. (2021). pymia: A Python package for data handling and evaluation in deep learning-based medical image analysis. Computer Methods and Programs in Biomedicine, 198, 105796.](https://doi.org/10.1016/j.cmpb.2020.105796)

BibTeX entry:

    @article{Jungo2021a,
      author = {Jungo, Alain and Scheidegger, Olivier and Reyes, Mauricio and Balsiger, Fabian},
      doi = {10.1016/j.cmpb.2020.105796},
      issn = {01692607},
      journal = {Computer Methods and Programs in Biomedicine},
      pages = {105796},
      title = {{pymia: A Python package for data handling and evaluation in deep learning-based medical image analysis}},
      volume = {198},
      year = {2021},
    }
12 changes: 0 additions & 12 deletions README.rst

This file was deleted.

11 changes: 8 additions & 3 deletions docs/conf.py
@@ -31,9 +31,14 @@
exec(f.read(), about)

# -- Copy example Jupyter notebooks for documentation building
shutil.copyfile(os.path.join(basedir, 'examples', 'augmentation', 'basic.ipynb'),
                os.path.join(basedir, 'docs', 'examples.augmentation.basic.ipynb'))

shutil.copyfile(os.path.join(basedir, 'examples', 'data', 'creation.ipynb'),
                os.path.join(basedir, 'docs', 'examples.data.creation.ipynb'))

# examples.data.extraction_assembly.ipynb not copied as there exists a rst file

shutil.copyfile(os.path.join(basedir, 'examples', 'evaluation', 'basic.ipynb'),
                os.path.join(basedir, 'docs', 'examples.evaluation.basic.ipynb'))

@@ -43,9 +48,6 @@
shutil.copyfile(os.path.join(basedir, 'examples', 'filtering', 'basic.ipynb'),
                os.path.join(basedir, 'docs', 'examples.filtering.basic.ipynb'))

shutil.copyfile(os.path.join(basedir, 'examples', 'augmentation', 'basic.ipynb'),
                os.path.join(basedir, 'docs', 'examples.augmentation.basic.ipynb'))

# -- General configuration ------------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
@@ -105,6 +107,9 @@
# This patterns also effect to html_static_path and html_extra_path
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']

# modules to be mocked
autodoc_mock_imports = ['tensorflow', 'torch']

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'default'

6 changes: 3 additions & 3 deletions docs/examples.data.extraction_assembly.rst
@@ -19,14 +19,14 @@ At the end of this example you find examples for the following additional use ca
* Extracting from a metadata dataset

.. tip::
This example is available as Jupyter notebook at `./examples/data/extraction_assembly.ipynb` and Python scripts for PyTorch and TensorFlow at at `./examples/data/extraction_assembly.py` and `./examples/data/extraction_assembly_tensorflow.py`, respectively.
This example is available as Jupyter notebook at `./examples/data/extraction_assembly.ipynb <https://github.com/rundherum/pymia/blob/master/examples/data/extraction_assembly.ipynb>`_ and Python scripts for PyTorch and TensorFlow at `./examples/data/extraction_assembly.py <https://github.com/rundherum/pymia/blob/master/examples/data/extraction_assembly.py>`_ and `./examples/data/extraction_assembly_tensorflow.py <https://github.com/rundherum/pymia/blob/master/examples/data/extraction_assembly_tensorflow.py>`_, respectively.

The extraction of 3-D patches is available as Python script at `./examples/data/extraction_assembly_3dpatch.py`.
The extraction of 3-D patches is available as Python script at `./examples/data/extraction_assembly_3dpatch.py <https://github.com/rundherum/pymia/blob/master/examples/data/extraction_assembly_3dpatch.py>`_.

.. note::
To be able to run this example:

- Get the example data by executing `./examples/example-data/pull_example_data.py`.
- Get the example data by executing `./examples/example-data/pull_example_data.py <https://github.com/rundherum/pymia/blob/master/examples/example-data/pull_example_data.py>`_.


Code walkthrough
Binary file modified docs/images/fig-data-assembly.png
Binary file modified docs/images/fig-data-creation.png
Binary file modified docs/images/fig-data-extraction.png
Binary file modified docs/images/fig-evaluation.png
24 changes: 13 additions & 11 deletions docs/index.rst
@@ -1,4 +1,7 @@
.. include:: ../README.rst
pymia is an open-source Python (py) package for deep learning-based medical image analysis (mia).
The package addresses two main parts of deep learning pipelines: data handling and evaluation.
The package itself is independent of the deep learning framework used but can easily be integrated into TensorFlow and PyTorch pipelines.
Therefore, pymia is highly flexible, allows for fast prototyping, and reduces the burden of implementing data handling and evaluation.

Main Features
=============
@@ -46,24 +49,24 @@ If you are new to pymia, here are a few guides to get you up to speed right away

Citation
========
If you use pymia for your research, please acknowledge it accordingly by citing:
If you use pymia for your research, please acknowledge it accordingly by citing our paper:

.. code-block:: none
Jungo, A., Scheidegger, O., Reyes, M., & Balsiger, F. (2020). pymia: A Python package for data handling and evaluation in deep learning-based medical image analysis. ArXiv preprint 2010.03639.
`Jungo, A., Scheidegger, O., Reyes, M., & Balsiger, F. (2021). pymia: A Python package for data handling and evaluation in deep learning-based medical image analysis. Computer Methods and Programs in Biomedicine, 198, 105796 <https://doi.org/10.1016/j.cmpb.2020.105796>`_


BibTeX entry:

.. code-block:: none
@article{Jungo2020a,
archivePrefix = {arXiv},
arxivId = {2010.03639},
@article{Jungo2021a,
author = {Jungo, Alain and Scheidegger, Olivier and Reyes, Mauricio and Balsiger, Fabian},
journal = {arXiv preprint},
doi = {10.1016/j.cmpb.2020.105796},
issn = {01692607},
journal = {Computer Methods and Programs in Biomedicine},
pages = {105796},
title = {{pymia: A Python package for data handling and evaluation in deep learning-based medical image analysis}},
year = {2020}
volume = {198},
year = {2021},
}
@@ -80,4 +83,3 @@ Indices and tables

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
4 changes: 1 addition & 3 deletions docs/installation.rst
@@ -70,8 +70,6 @@ Run Sphinx in the pymia root directory to create the documentation:
- The documentation is now available under ``./docs/_build/index.html``

.. note::
To build the documentation including :mod:`pymia.data.backends`, the installation of PyTorch (:bash:`pip install torch`) and TensorFlow (:bash:`pip install tensorflow`) are required.

It might further be required to install `pandoc <https://pandoc.org/>`_.
To build the documentation, it might be required to install `pandoc <https://pandoc.org/>`_.

In case of the warning `WARNING: LaTeX command 'latex' cannot be run (needed for math display), check the imgmath_latex setting`, set the `imgmath_latex <http://www.sphinx-doc.org/en/master/usage/extensions/math.html#confval-imgmath_latex>`_ setting in the ``./docs/conf.py`` file.
6 changes: 3 additions & 3 deletions docs/pymia.data.rst
@@ -14,7 +14,7 @@ The three main components of the data package are creation, extraction, and asse

**Creation**

The creation of a dataset is managed by the :class:`.Traverser` class, which processes the data of every subject (case) iteratively. It employs :class:`.Load` and :class:`.Callback` classes to load the raw data and write it to the dataset. :class:`.Transform` classes can be used to apply modifications to the data, e.g., an intensity normalization. For the ease of usage, the defaults :func:`.get_default_callbacks` and :class:`.LoadDefault` are implemented, which cover the most fundamental cases.
The creation of a dataset is managed by the :class:`.Traverser` class, which processes the data of every subject (case) iteratively. It employs :class:`.Load` and :class:`.Callback` classes to load the raw data and write it to the dataset. :class:`.Transform` classes can be used to apply modifications to the data, e.g., an intensity normalization. For ease of use, the defaults :func:`.get_default_callbacks` and :class:`.LoadDefault` are implemented, which cover the most fundamental cases. The code example :ref:`Creation of a dataset <example-data1>` illustrates how to create a dataset.

.. image:: ./images/fig-data-creation.png
:width: 200
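
A rough sketch of how these pieces might fit together is shown below; the exact signatures of ``SubjectFile``, ``get_writer``, and ``Traverser.traverse`` are assumptions based on the linked code example and should be verified there.

.. code-block:: python

    import pymia.data.creation as crtn
    import pymia.data.subjectfile as subj

    # One SubjectFile per subject; the paths and category names are placeholders.
    subjects = [subj.SubjectFile('Subject_1',
                                 images={'T1': 'path/to/Subject_1/T1.mha'},
                                 labels={'GT': 'path/to/Subject_1/GT.mha'})]

    # Traverse the subjects and write them to a dataset using the default callbacks and loader.
    with crtn.get_writer('dataset.h5') as writer:
        callbacks = crtn.get_default_callbacks(writer)
        crtn.Traverser().traverse(subjects, callback=callbacks, load=crtn.LoadDefault())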
@@ -23,7 +23,7 @@ The creation of a dataset is managed by the :class:`.Traverser` class, which pro

**Extraction**

Data extraction from the dataset is managed by the :class:`.PymiaDatasource` class, which provides a flexible interface for retrieving data, or chunks of data, to form training samples. An :class:`.IndexingStrategy` is used to define how the data is indexed, meaning accessing, for instance, an image slice or a 3-D patch of an 3-D image. :class:`.Extractor` classes extract the data from the dataset, and :class:`.Transform` classes can be used to alter the extracted data.
Data extraction from the dataset is managed by the :class:`.PymiaDatasource` class, which provides a flexible interface for retrieving data, or chunks of data, to form training samples. An :class:`.IndexingStrategy` is used to define how the data is indexed, meaning accessing, for instance, an image slice or a 3-D patch of a 3-D image. :class:`.Extractor` classes extract the data from the dataset, and :class:`.Transform` classes can be used to alter the extracted data. The code example :ref:`Data extraction and assembly <example-data2>` illustrates how to extract data.

.. image:: ./images/fig-data-extraction.png
:width: 200
@@ -32,7 +32,7 @@ Data extraction from the dataset is managed by the :class:`.PymiaDatasource` cla

**Assembly**

The :class:`.Assembler` class manages the assembly of the predicted neural network outputs by using the identical indexing that was employed to extract the data by the :class:`.PymiaDatasource` class.
The :class:`.Assembler` class manages the assembly of the predicted neural network outputs by using the identical indexing that was employed to extract the data by the :class:`.PymiaDatasource` class. The code example :ref:`Data extraction and assembly <example-data2>` illustrates how to assemble data.

.. image:: ./images/fig-data-assembly.png
:width: 200
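
As a rough sketch (class names as above; the ``add_batch`` and ``get_assembled_subject`` calls are assumptions based on the linked code example and should be verified there), extraction and assembly might be combined as follows:

.. code-block:: python

    import pymia.data.assembler as assm
    import pymia.data.extraction as extr

    # Slice-wise extraction of the 'images' and 'labels' categories from the dataset.
    datasource = extr.PymiaDatasource('dataset.h5',
                                      extr.SliceIndexing(),
                                      extr.DataExtractor(categories=('images', 'labels')))
    sample = datasource[0]  # e.g., sample['images'] and sample['labels'] hold one slice

    # Assemble network outputs back into full subjects using the same indexing.
    assembler = assm.SubjectAssembler(datasource)
    # In the prediction loop (names as assumed above):
    #   assembler.add_batch(predictions, sample_indices, last_batch)
    #   prediction = assembler.get_assembled_subject(subject_index)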
5 changes: 5 additions & 0 deletions docs/rtd-requirements.txt
@@ -0,0 +1,5 @@
# packages needed for building on readthedocs.io

sphinx >= 3.2.1
nbsphinx >= 0.7.1
sphinx-copybutton >= 0.2.12
10 changes: 0 additions & 10 deletions docs/rtfd-requirements.txt

This file was deleted.
