Improving documentation #477

Merged: 29 commits merged into master from lr/fixing_documentation on Oct 18, 2020
Changes from all commits (29 commits):
a82d907
modifying model section
lrouhier Oct 15, 2020
038b7b4
adding data under usage (ref in doc needed to be modified)
lrouhier Oct 15, 2020
622d72a
removing data file and changing order (config file before usage)
lrouhier Oct 15, 2020
79bead2
missing space
lrouhier Oct 15, 2020
5949504
visual error in api ref
lrouhier Oct 15, 2020
8965c70
testing with check mark and cross
lrouhier Oct 15, 2020
d8b285d
update table and center
lrouhier Oct 15, 2020
036cc71
change cross
lrouhier Oct 16, 2020
1dca897
Merge branch 'master' into lr/fixing_documentation
lrouhier Oct 16, 2020
c8ca606
Merge branch 'lr/fixing_documentation' of https://github.com/neuropol…
lrouhier Oct 16, 2020
5c82399
fixing visual
lrouhier Oct 16, 2020
f9110bb
missing blank line (triggered warning)
lrouhier Oct 16, 2020
6330738
changing paper for article to avoid hyperlink issue (trigerred warning)
lrouhier Oct 16, 2020
0dec461
adding extra line
lrouhier Oct 16, 2020
9587143
update file after review
lrouhier Oct 16, 2020
3581b3d
Added BIDS logo, clarified Data section
jcohenadad Oct 16, 2020
966a626
Moved example json files up
jcohenadad Oct 16, 2020
fc59841
Fixed broken links for example config files
jcohenadad Oct 16, 2020
2a5493b
Moved data to its own main section
jcohenadad Oct 16, 2020
b9b553d
Moved architectures and pre-trained models to main sections
jcohenadad Oct 16, 2020
9eb7885
replace models.rst by architectures.rst
lrouhier Oct 16, 2020
7cc14e1
after name change
lrouhier Oct 16, 2020
c6f08dc
modified some elements
lrouhier Oct 16, 2020
b3ed541
Fixed broken link
jcohenadad Oct 16, 2020
7a90ac7
typo
lrouhier Oct 16, 2020
4d3358f
Update usage.rst
jcohenadad Oct 16, 2020
9944498
fixing link
lrouhier Oct 16, 2020
2f563b1
fixing license link
lrouhier Oct 16, 2020
89b6da1
Changed abs link to relative link
jcohenadad Oct 16, 2020
23 changes: 13 additions & 10 deletions docs/source/models.rst → docs/source/architectures.rst
@@ -1,44 +1,47 @@
Models
======
.. _architectures:

Architectures
=============

The following architectures are available in ``ivadomed``.

:mod:`ResNet`
---------------------------
^^^^^^^^^^^^^

.. autoclass:: ivadomed.models.ResNet


:mod:`DenseNet`
---------------------------
^^^^^^^^^^^^^^^

.. autoclass:: ivadomed.models.DenseNet


:mod:`Unet`
---------------------------
^^^^^^^^^^^

.. autoclass:: ivadomed.models.Unet


:mod:`FiLMedUnet`
---------------------------
^^^^^^^^^^^^^^^^^

.. autoclass:: ivadomed.models.FiLMedUnet


:mod:`HeMISUnet`
---------------------------
^^^^^^^^^^^^^^^^

.. autoclass:: ivadomed.models.HeMISUnet


:mod:`UNet3D`
---------------------------
^^^^^^^^^^^^^

.. autoclass:: ivadomed.models.UNet3D


:mod:`Countception`
---------------------------
^^^^^^^^^^^^^^^^^^^

.. autoclass:: ivadomed.models.Countception
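
These classes live in the ``ivadomed.models`` module, so they can be inspected directly from a Python session; a minimal sketch:

.. code-block:: python

    from ivadomed import models

    # Each architecture documented above is importable from ivadomed.models;
    # help() prints the constructor arguments covered by this API reference.
    help(models.Unet)
    help(models.HeMISUnet)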

24 changes: 14 additions & 10 deletions docs/source/comparison_other_projects_table.csv
@@ -1,10 +1,14 @@
**Name**,**Repository**,**BIDS**,**DL base library**,"**Task (segmentation, detection, classification)**",**Data dimension**,**Multichannel / Multilabel**,**Uncertainty**,**Transfer Learning**,**Pre-processing tools**,**Post-processing tools**,**User case examples**,**Multi-GPU data parallelism**,**Automatic Model evaluation**,**Input region of interest**,**Missing modality**,**Model performance comparison**,**Automatic hyperparameter optimisation**,**Packaged Multi-center Model**
**ivadomed**,https://github.com/neuropoly/ivado-medical-imaging,Yes,PyTorch,"Classification, Segmentation, Detection","2D, 3D",Both,**Epistemic Aleatoric**,Yes,Yes,Yes,Yes,No,Yes,Yes (image),**Yes**,**Yes**,Yes,Yes
monai,https://github.com/Project-MONAI/MONAI,No,PyTorch,"Segmentation, Classification","2D,3D",Both,None,No,Yes,No,Yes,Yes,Yes,Yes (coordinates),No,No,No,No
delira,https://github.com/justusschock/delira,No,PyTorch and TensorFlow,"Classification, Generation, Segmentation","2D,3D",None,None,Yes,No,No,Yes,Yes,No,No,No,No,No,No
MIC-DKFZ,https://github.com/MIC-DKFZ/medicaldetectiontoolkit,No,Torch,Detection,"2D,3D",None,None,No,No,No,No,No,Yes,Yes (image),No,No,No,No
ANTsPyNet,https://github.com/ANTsX/ANTsPyNet,No,Kears (backend TF),"Classification, segmentation,clustering, GAN, registration, super-resolution, autoencoder","2D,3D",Multilabel,None,No,No,No,Yes ,No,No,No,No,No,No,No
DLTK,https://github.com/DLTK/DLTK,No,Tensorflow,"Classification, segmentation, GAN, registration, super-resolution, autoencoder",3D,Multilabel,None,Yes,No,No,Yes,No,No,No,No,No,No,No
MIScnn,https://github.com/frankkramer-lab/MIScnn,No,Tensorflow/Keras,Segmentation,"2D,3D",Multilabel,None,No,Yes,Yes,No,Yes,Yes,No,No,No,No,No
niftytorch,https://niftytorch.github.io/doc/,Yes,Torch,Classificication /segmentation ,3D,None,None,No,No,No,Yes,Yes,No,Yes,No,No,Yes,No
DeepNeuro,https://github.com/QTIM-Lab/DeepNeuro,No,Tensorflow/Keras,Segmentation,2D/3D,None,None,No,Yes,Yes,Yes,No,No,No,No,No,No,Yes
**Name**,**Repository**,**BIDS**,**DL base library**,"**Task (segmentation, detection, classification)**",**Data dimension**,**Multichannel**,**Multilabel**,**Uncertainty**,**Transfer Learning**,**Pre-processing tools**,**Post-processing tools**,**User case examples**,**Multi-GPU data parallelism**,**Automatic Model evaluation**,**Input region of interest**,**Missing modality**,**Model performance comparison**,**Automatic hyperparameter optimisation**,**Packaged Multi-center Model**
**ivadomed**,https://github.com/ivadomed/ivadomed,|yes|,PyTorch,"Classification, Segmentation, Detection","2D, 3D",|yes| ,|yes| ,|yes|,|yes|,|yes|,|yes|,|yes|,|no|,|yes|,|yes| ,|yes|,|yes|,|yes|,|yes|
monai,https://github.com/Project-MONAI/MONAI,|no|,PyTorch,"Segmentation, Classification","2D, 3D",|yes|,|yes|,|no|,|no|,|yes|,|no|,|yes|,|yes|,|yes|,|yes|,|no|,|no|,|no|,|no|
delira,https://github.com/justusschock/delira,|no|,PyTorch and TensorFlow,"Classification, Generation, Segmentation","2D, 3D",|no| ,|no| ,|no|,|yes|,|no|,|no|,|yes|,|yes|,|no|,|no|,|no|,|no|,|no|,|no|
MIC-DKFZ,https://github.com/MIC-DKFZ/medicaldetectiontoolkit,|no|,PyTorch,Detection,"2D, 3D",|no| ,|no| ,|no|,|no|,|no|,|no|,|no|,|no|,|yes|,|yes|,|no|,|no|,|no|,|no|
ANTsPyNet,https://github.com/ANTsX/ANTsPyNet,|no|,Tensorflow/Keras,"Classification, Segmentation, Clustering, GAN, Registration, Super-resolution, Autoencoder","2D, 3D",|no| ,|yes| ,|no|,|no|,|no|,|no|,|yes| ,|no|,|no|,|no|,|no|,|no|,|no|,|no|
DLTK,https://github.com/DLTK/DLTK,|no|,Tensorflow,"Classification, Segmentation, GAN, Registration, Super-resolution, Autoencoder",3D,|no| ,|yes| ,|no| ,|yes|,|no|,|no|,|yes|,|no|,|no|,|no|,|no|,|no|,|no|,|no|
MIScnn,https://github.com/frankkramer-lab/MIScnn,|no|,Tensorflow/Keras,Segmentation,"2D, 3D",|no| ,|yes| ,|no| ,|no|,|yes|,|yes|,|no|,|yes|,|yes|,|no|,|no|,|no|,|no|,|no|
niftytorch,https://niftytorch.github.io/doc/,|yes|,PyTorch,"Classification, Segmentation ",3D,|no| ,|no| ,|no| ,|no|,|no|,|no|,|yes|,|yes|,|no|,|yes|,|no|,|no|,|yes|,|no|
DeepNeuro,https://github.com/QTIM-Lab/DeepNeuro,|no|,Tensorflow/Keras,Segmentation,"2D, 3D",|no| ,|no|,|no|,|no|,|yes|,|yes|,|yes|,|no|,|no|,|no|,|no|,|no|,|no|,|yes|
69 changes: 35 additions & 34 deletions docs/source/configuration_file.rst
@@ -1,6 +1,23 @@
Configuration File
==================

All parameters used for loading data, training and predicting are contained
within a single JSON configuration file. This section describes how to set up
this configuration file.

For convenience, here is a generic configuration file: `config\_config.json <https://raw.githubusercontent.com/ivadomed/ivadomed/master/ivadomed/config/config.json>`__.

Below are other, more specific configuration files:

- `config\_classification.json <https://raw.githubusercontent.com/ivadomed/ivadomed/master/ivadomed/config/config_classification.json>`__: Trains a classification model.

- `config\_sctTesting.json <https://raw.githubusercontent.com/ivadomed/ivadomed/master/ivadomed/config/config_sctTesting.json>`__: Trains a 2D segmentation task with the U-Net architecture.

- `config\_spineGeHemis.json <https://raw.githubusercontent.com/ivadomed/ivadomed/master/ivadomed/config/config_spineGeHemis.json>`__: Trains a segmentation task with the HeMIS-UNet architecture.

- `config\_tumorSeg.json <https://raw.githubusercontent.com/ivadomed/ivadomed/master/ivadomed/config/config_tumorSeg.json>`__: Trains a segmentation task with a 3D U-Net architecture.
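
A configuration file is an ordinary JSON document, so it can be loaded and inspected programmatically before launching a run; a minimal sketch, assuming a local copy of the generic file linked above:

.. code-block:: python

    import json

    # Hypothetical local copy of the generic configuration file linked above.
    with open("config.json") as f:
        config = json.load(f)

    # All loading, training and prediction parameters live in this one dict.
    print(sorted(config.keys()))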


General parameters
------------------

@@ -95,20 +112,20 @@ slice\_filter
^^^^^^^^^^^^^

Dict. Discard a slice from the dataset if it meets a condition, see
below.
below.

- ``filter_empty_input``: Bool. Discard slices where all voxel
intensities are zeros.
intensities are zeros.
- ``filter_empty_mask``: Bool. Discard slices
where all voxel labels are zeros.

roi
^^^

Dict. of parameters about the region of interest
Dict. of parameters about the region of interest

- ``suffix``: String. Suffix of the derivative file containing the ROI used to crop (e.g. ``"_seg-manual"``) with ``ROICrop`` as transform. Please use ``null`` if
you do not want to use an ROI to crop.
you do not want to use an ROI to crop.
- ``slice_filter_roi``: int. If the ROI mask contains less than ``slice_filter_roi`` non-zero voxels,
the slice will be discarded from the dataset. This feature helps with
noisy labels, e.g., if a slice contains only 2-3 labeled voxels, we do
@@ -254,9 +271,9 @@ Architecture
------------

Architectures for both segmentation and classification are available and
described in the :ref:`models:Models` section. If the selected
described in the :ref:`architectures` section. If the selected
architecture is listed in the
`loader <ivadomed/loader/loader.py>`__ file, a
`loader <https://github.com/ivadomed/ivadomed/blob/lr/fixing_documentation/ivadomed/loader/loader.py>`__ file, a
classification (not segmentation) task is run. In the case of a
classification task, the ground truth will correspond to a single label
value extracted from ``target``, instead of being an array (the latter
@@ -265,14 +282,13 @@ being used for the segmentation task.
default\_model (Mandatory)
^^^^^^^^^^^^^^^^^^^^^^^^^^

Dict. Define the default model (``Unet``) and mandatory parameters that
are common to all available architectures (listed in the
:ref:`models:Models` section). For more specific models (see below),
Dictionary. Define the default model (``Unet``) and mandatory parameters that
are common to all available :ref:`architectures`. For custom architectures (see below),
the default parameters are merged with the parameters that are specific
to the tailored model.
to the tailored architecture.

- ``name``: ``Unet`` (default)
- ``dropout_rate``: Float (e.g. 0.4).
- ``name``: ``Unet`` (default)
- ``dropout_rate``: Float (e.g. 0.4).
- ``batch_norm_momentum``: Float (e.g. 0.1).
- ``depth``: Strictly positive integer. Number of down-sampling operations.
- ``relu`` (optional): Bool. Sets final activation to normalized ReLU (relu between 0 and 1).
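
Put together, a ``default_model`` block using the mandatory keys above could look like the following sketch (the values, including the depth, are illustrative only):

.. code-block:: python

    # Illustrative "default_model" block; the keys mirror the list above,
    # the values (e.g. depth=3) are example choices, not recommendations.
    default_model = {
        "name": "Unet",
        "dropout_rate": 0.4,
        "batch_norm_momentum": 0.1,
        "depth": 3,
    }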

@@ -325,8 +341,8 @@ uncertainty
Uncertainty computation is performed if ``n_it>0`` and at least
``epistemic`` or ``aleatoric`` is ``true``. Note: both ``epistemic`` and
``aleatoric`` can be ``true``.
- ``epistemic``: Bool. Model-based uncertainty with `Monte Carlo Dropout <https://arxiv.org/abs/1506.02142>`__.

- ``epistemic``: Bool. Model-based uncertainty with `Monte Carlo Dropout <https://arxiv.org/abs/1506.02142>`__.
- ``aleatoric``: Bool. Image-based uncertainty with `test-time augmentation <https://doi.org/10.1016/j.neucom.2019.01.103>`__.
- ``n_it``: Integer. Number of Monte Carlo iterations. Set to 0 for no
uncertainty computation.
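
For example, an ``uncertainty`` block enabling only epistemic (Monte Carlo Dropout) estimation could look like this sketch (the iteration count is illustrative):

.. code-block:: python

    # Illustrative "uncertainty" block: epistemic uncertainty only,
    # with 20 Monte Carlo iterations; set n_it to 0 to disable computation.
    uncertainty = {
        "epistemic": True,
        "aleatoric": False,
        "n_it": 20,
    }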
@@ -337,10 +353,10 @@ Cascaded Architecture Features
object\_detection\_params (Optional)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

- ``object_detection_path``: String. Path to object detection model and
the configuration file. The folder, configuration file, and model need
to have the same name (e.g. ``findcord_tumor/``,
``findcord_tumor/findcord_tumor.json``, and
- ``object_detection_path``: String. Path to object detection model and
the configuration file. The folder, configuration file, and model need
to have the same name (e.g. ``findcord_tumor/``,
``findcord_tumor/findcord_tumor.json``, and
``findcord_tumor/findcord_tumor.onnx``, respectively).
The model's prediction will be used to generate bounding boxes.
- ``safety_factor``: List. List of length 3 containing the factors to
@@ -386,20 +402,5 @@ Available transformations:
``0`` to disable.
- ``HistogramClipping`` (parameters: ``min_percentile``,
``max_percentile``)
- ``Clage`` (parameters: ``clip_limit``, ``kernel_size``)
- ``Clahe`` (parameters: ``clip_limit``, ``kernel_size``)
- ``RandomReverse``

Examples
--------

Examples of configuration files: `config\_config.json <ivadomed/config/config.json>`__.

In particular:

- `config\_classification.json <ivadomed/config/config_classification.json>`__. Is dedicated to classification task.

- `config\_sctTesting.json <ivadomed/config/config_sctTesting.json>`__. Is a user case of 2D segmentation using a U-Net model.

- `config\_spineGeHemis.json <ivadomed/config/config_spineGeHemis.json>`__. Shows how to use the HeMIS-UNet.

- `config\_tumorSeg.json <ivadomed/config/config_tumorSeg.json>`__. Runs a 3D segmentation using a 3D UNet.
16 changes: 8 additions & 8 deletions docs/source/contributing.rst
@@ -4,7 +4,7 @@ Contributing to ivadomed
Introduction
------------

First off, thanks for taking the time to contribute! 🎉
First off, thanks for taking the time to contribute! 🎉

When contributing to this repository, please first discuss the change
you wish to make by opening a new `Github
@@ -167,8 +167,8 @@ Licensing

Ensure that you are the original author of your changes, and if that is
not the case, ensure that the borrowed/adapted code is compatible with
the `project's
license <https://ivadomed.org/en/latest/index.html#license>`__.
the project's
:ref:`license`.

Committing
----------
@@ -234,15 +234,15 @@ If the PR fixes issue(s), indicate it after your introduction:
``Fixes #XXXX, Fixes #YYYY``. Note: it is important to respect the
syntax above so that the issue(s) will be closed upon merging the PR.

Work in progress
Work in progress
~~~~~~~~~~~~~~~~

If your PR is not ready for review yet, you can convert it to a "Draft", so the team is informed.

A draft pull request is styled differently to clearly indicate that it’s in a draft state.
Merging is blocked in draft pull requests. Change the status to “Ready for review” near the
bottom of your pull request to remove the draft state and allow merging according to your
project’s settings.
A draft pull request is styled differently to clearly indicate that it’s in a draft state.
Merging is blocked in draft pull requests. Change the status to “Ready for review” near the
bottom of your pull request to remove the draft state and allow merging according to your
project’s settings.

Continuous Integration
~~~~~~~~~~~~~~~~~~~~~~
12 changes: 6 additions & 6 deletions docs/source/data.rst
@@ -1,12 +1,12 @@
Data
====

Without data, nothing can be done. To get you started, we recommend you
download the `Example data for Ivadomed <https://github.com/ivadomed/data_example_spinegeneric/releases/tag/r20200825>`__. This dataset is composed of 10 subjects from different imaging centers and includes
original images in NIfTI format as well as manual segmentations and
labels. The data are organized according to the
`BIDS <http://bids.neuroimaging.io/>`__ convention, to be fully
compatible with ``ivadomed`` loader:
To facilitate the organization of data, ``ivadomed`` requires the data to be
organized according to the `Brain Imaging Data Structure (BIDS) <http://bids.neuroimaging.io/>`__ convention.
An example of this organization is shown below:

.. image:: ../../images/1920px-BIDS_Logo.png
:alt: BIDS_Logo

::

7 changes: 4 additions & 3 deletions docs/source/index.rst
@@ -23,10 +23,11 @@ Home
:caption: Getting started

installation.rst
usage.rst
configuration_file.rst
data.rst
models.rst
configuration_file.rst
usage.rst
architectures.rst
pretrained_models.rst
scripts.rst

.. _tutorials:
2 changes: 2 additions & 0 deletions docs/source/license.rst
@@ -1,3 +1,5 @@
.. _license:

License
=======

12 changes: 12 additions & 0 deletions docs/source/pretrained_models.rst
@@ -0,0 +1,12 @@
Pre-trained models
==================

For convenience, the following pre-trained models are ready-to-use:

- `t2-tumor <https://github.com/ivadomed/t2_tumor/archive/r20200621.zip>`_: Cord tumor segmentation model, trained on T2-weighted contrast.
- `t2star_sc <https://github.com/ivadomed/t2star_sc/archive/r20200622.zip>`_: Spinal cord segmentation model, trained on T2-star contrast.
- `mice_uqueensland_gm <https://github.com/ivadomed/mice_uqueensland_gm/archive/r20200622.zip>`_: Gray matter segmentation model on mouse MRI. Data from University of Queensland.
- `mice_uqueensland_sc <https://github.com/ivadomed/mice_uqueensland_sc/archive/r20200622.zip>`_: Cord segmentation model on mouse MRI. Data from University of Queensland.
- `findcord_tumor <https://github.com/ivadomed/findcord_tumor/archive/r20200621.zip>`_: Cord localisation model, trained on T2-weighted images with tumor.
- `model_find_disc_t1 <https://github.com/ivadomed/model_find_disc_t1/archive/r20201013.zip>`_: Intervertebral disc detection model trained on T1-weighted images.
- `model_find_disc_t2 <https://github.com/ivadomed/model_find_disc_t2/archive/r20200928.zip>`_: Intervertebral disc detection model trained on T2-weighted images.
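
Each model is distributed as a plain ``.zip`` release archive, so it can be fetched and unpacked with the Python standard library; a minimal sketch using one of the URLs above (the destination folder is an arbitrary choice):

.. code-block:: python

    import urllib.request
    import zipfile

    # Download one of the packaged models listed above and unpack it locally.
    url = "https://github.com/ivadomed/t2star_sc/archive/r20200622.zip"
    archive, _ = urllib.request.urlretrieve(url, "t2star_sc.zip")
    with zipfile.ZipFile(archive) as zf:
        zf.extractall("t2star_sc")  # destination folder is a free choice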
13 changes: 12 additions & 1 deletion docs/source/purpose.rst
@@ -1,11 +1,22 @@
.. |yes| raw:: html

<style> .line {text-align:center;} </style>
<p style="color:green" align="center"> &#10004;</p>

.. |no| raw:: html

<style> .line {text-align:center;} </style>
<p style="color:red" align="center"> &#10007;</p>


Purpose
=======

The purpose of the ``ivadomed`` project is to:

* Provide researchers with an open-source framework for training deep learning models for applications in medical imaging;

* Provide ready-to-use :doc:`models` trained on multi-center data.
* Provide ready-to-use :doc:`pretrained_models` trained on multi-center data.

Comparison with other projects
------------------------------
2 changes: 1 addition & 1 deletion docs/source/tutorials/one_class_segmentation_2d_unet.rst
@@ -57,7 +57,7 @@ segmentation training.

"log_directory":"spineGeneric"

- ``loader_parameters:bids_path``: Location of the dataset. As discussed in :doc:`../data`, the dataset
- ``loader_parameters:bids_path``: Location of the dataset. As discussed in `Data <../data.html>`__, the dataset
should conform to the BIDS standard. Modify the path so it points to the location of the downloaded dataset.

.. code-block:: xml
7 changes: 5 additions & 2 deletions docs/source/usage.rst
@@ -1,6 +1,9 @@
Usage
=====

Command line tools
------------------

A new model can be generated using the command-line tool from the
terminal:

@@ -11,5 +14,5 @@ terminal:
where ``config.json`` is a configuration file, whose parameters are
described in the :ref:`configuration_file:Configuration File`.

To fully benefit from all the features of ``ivadomed``, please see the
:ref:`tutorials<tutorials>`.
To fully benefit from all the features of ``ivadomed``, please see the
``TUTORIALS`` section.
Binary file added images/1920px-BIDS_Logo.png
2 changes: 1 addition & 1 deletion ivadomed/losses.py
@@ -342,7 +342,7 @@ class AdapWingLoss(nn.Module):
Adaptive Wing loss
Used for heatmap ground truth.

..seealso::
.. seealso::
Wang, Xinyao, Liefeng Bo, and Li Fuxin. "Adaptive wing loss for robust face alignment via heatmap regression."
Proceedings of the IEEE International Conference on Computer Vision. 2019.

13 changes: 8 additions & 5 deletions ivadomed/main.py
@@ -66,13 +66,16 @@ def run_command(context, n_gif=0, thr_increment=None, resume_training=False):
thr_increment (float): A threshold analysis is performed at the end of the training using the trained model and
the training + validation sub-dataset to find the optimal binarization threshold. The specified value
indicates the increment between 0 and 1 used during the ROC analysis (e.g. 0.1).
resume_training (bool): Load a saved model ("checkpoint.pth.tar" in the log_directory) for resume
training. This training state is saved everytime a new best model is saved in the log
directory.
resume_training (bool): Load a saved model ("checkpoint.pth.tar" in the log_directory) for resume training.
This training state is saved every time a new best model is saved in the log
directory.

Returns:
Float or pandas Dataframe:
If "train" command: Returns floats: best loss score for both training and validation.
If "test" command: Returns a pandas Dataframe: of metrics computed for each subject of the testing sub dataset
and return the prediction metrics before evaluation.

If "test" command: Returns a pandas Dataframe: of metrics computed for each subject of the testing
sub dataset and return the prediction metrics before evaluation.
"""
command = copy.deepcopy(context["command"])
log_directory = copy.deepcopy(context["log_directory"])
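
Based on the signature and docstring above, ``run_command`` can also be driven from Python rather than the command line; a hedged sketch, assuming the ``context`` argument is simply the parsed JSON configuration file (which must carry at least the ``command`` and ``log_directory`` keys read above):

.. code-block:: python

    import json
    from ivadomed.main import run_command

    # Assumption: the context is the parsed configuration file; it must
    # provide the "command" and "log_directory" keys used by run_command().
    with open("config.json") as f:
        context = json.load(f)

    # "train" returns the best training/validation losses; "test" returns
    # a pandas DataFrame of per-subject metrics (see the docstring above).
    results = run_command(context)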
2 changes: 1 addition & 1 deletion ivadomed/models.py
@@ -145,7 +145,7 @@ class DenseNet(nn.Module):
drop_rate (float) - dropout rate after each dense layer
num_classes (int) - number of classification classes
memory_efficient (bool) - If True, uses checkpointing. Much more memory efficient,
but slower. Default: *False*. See `"paper" <https://arxiv.org/pdf/1707.06990.pdf>`_
but slower. Default: *False*. See `"article" <https://arxiv.org/pdf/1707.06990.pdf>`_
"""

def __init__(self, growth_rate=32, block_config=(6, 12, 24, 16),
Expand Down