
Commit

Merge 94a3e09 into f9653c0
mariehbourget committed Aug 2, 2022
2 parents f9653c0 + 94a3e09 commit 3b6d3b5
Showing 5 changed files with 78 additions and 28 deletions.
32 changes: 28 additions & 4 deletions docs/source/configuration_file.rst
@@ -2278,8 +2278,28 @@ Postprocessing
Evaluation Parameters
---------------------
Dict. Parameters to get object detection metrics (true positive and false detection rates), and this, for defined
object sizes.
Dict. Parameters for computing object detection metrics (lesion true positive and false detection rates)
for defined object sizes.

.. jsonschema::

{
"$schema": "http://json-schema.org/draft-04/schema#",
"title": "object_detection_metrics",
"$$description": [
        "Indicate whether object detection metrics (lesion true positive and false detection rates)\n",
        "are computed at evaluation time. Default: ``true``"
    ]
"type": "boolean"
}

.. code-block:: JSON

    {
        "evaluation_parameters": {
            "object_detection_metrics": false
        }
    }
.. jsonschema::
@@ -2295,7 +2315,8 @@ object sizes.
"These values will create several consecutive target size bins. For instance\n",
"with a list of two values, we will have three target size bins: minimal size\n",
"to first list element, first list element to second list element, and second\n",
"list element to infinity. Default: ``[20, 100]``."
"list element to infinity. Default: ``[20, 100]``.\n",
"``object_detection_metrics`` must be ``true`` for ``target_size`` to apply."
]
},
"unit": {
@@ -2329,7 +2350,10 @@ object sizes.
"options": {
"thr": {
"type": "int",
"description": "Minimal object size overlapping to be considered a TP, FP, or FN. Default: ``3``."
"$$description": [
"Minimal object size overlapping to be considered a TP, FP, or FN. Default: ``3``.\n",
"``object_detection_metrics`` must be ``true`` for ``overlap`` to apply."
]
},
"unit": {
"type": "string",
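The ``target_size`` thresholds documented above split objects into consecutive size bins. A minimal sketch of that binning (assuming bins start at 0 and the threshold list is sorted; ``build_size_bins`` is a hypothetical helper, not part of ivadomed):

```python
def build_size_bins(thr):
    """Turn a sorted threshold list into consecutive (lower, upper) size ranges."""
    edges = [0] + list(thr) + [float("inf")]
    return list(zip(edges[:-1], edges[1:]))

# The documented default thresholds [20, 100] yield three bins:
print(build_size_bins([20, 100]))  # [(0, 20), (20, 100), (100, inf)]
```

With a two-element list, objects are thus reported separately for the small, medium, and large ranges, matching the "minimal size to first element, first to second, second to infinity" description.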
17 changes: 13 additions & 4 deletions docs/source/tutorials/two_class_microscopy_seg_2d_unet.rst
@@ -114,9 +114,9 @@ microscopy segmentation training.
"train_fraction": 0.6
"test_fraction": 0.1
- ``training_parameters:training_time:num_epochs``: The maximum number of epochs that will be run during training. Each epoch is composed
of a training part and a validation part. It should be a strictly positive integer. In our case, we will use
50 epochs.
- ``training_parameters:training_time:num_epochs``: The maximum number of epochs that will be run during training.
Each epoch is composed of a training part and a validation part. It should be a strictly positive integer.
In our case, we will use 50 epochs.

.. code-block:: xml
@@ -132,12 +132,21 @@ microscopy segmentation training.
"length_2D": [256, 256]
"stride_2D": [244, 244]
- ``postprocessing:binarize_maxpooling``: Used to binarize predictions across all classes in multiclass models. For each pixel, the class, including the background class, with the highest output probability will be segmented.
- ``postprocessing:binarize_maxpooling``: Used to binarize predictions across all classes in multiclass models.
For each pixel, the class, including the background class, with the highest output probability will be segmented.

.. code-block:: xml

    "binarize_maxpooling": {}
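Conceptually, ``binarize_maxpooling`` is a per-pixel argmax across class probabilities. A minimal sketch (assuming a ``(C, H, W)`` probability array with the background class at channel 0; this is an illustrative stand-in, not ivadomed's implementation):

```python
import numpy as np

def binarize_maxpooling(probs):
    """One-hot masks for the winning class per pixel, background dropped.

    probs: (C, H, W) class probabilities, background assumed at channel 0.
    Returns a (C-1, H, W) uint8 array of binary masks.
    """
    winner = probs.argmax(axis=0)  # per-pixel winning class index
    n_classes = probs.shape[0]
    return np.stack([(winner == c) for c in range(1, n_classes)]).astype(np.uint8)

probs = np.array([[[0.6, 0.2]],   # background
                  [[0.3, 0.7]],   # class 1
                  [[0.1, 0.1]]])  # class 2
print(binarize_maxpooling(probs))  # pixel 0 -> background, pixel 1 -> class 1
```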
- ``evaluation_parameters:object_detection_metrics``: Used to indicate whether object detection metrics
(lesion true positive and false detection rates) are computed at evaluation time.
For the axons and myelin segmentation task, we set this parameter to ``false``.

.. code-block:: xml

    "object_detection_metrics": false
- ``transformation:Resample``: Used to resample images to a common resolution (in mm) before splitting into patches,
according to each image real pixel size. In our case, we resample the images to a common resolution of 0.0001 mm
(0.1 μm) in both dimensions.
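The resampling step above can be sketched with plain nearest-neighbour index mapping (a simplified stand-in: ivadomed's actual ``Resample`` transform uses proper interpolation and handles label images separately; pixel sizes here are assumed to be in mm):

```python
import numpy as np

def resample_nearest(img, pixel_size_mm, target_mm=0.0001):
    """Nearest-neighbour resampling of a 2D image to target_mm per pixel."""
    factor = pixel_size_mm / target_mm  # > 1 means upsampling
    h, w = img.shape
    rows = (np.arange(int(round(h * factor))) / factor).astype(int).clip(0, h - 1)
    cols = (np.arange(int(round(w * factor))) / factor).astype(int).clip(0, w - 1)
    return img[np.ix_(rows, cols)]

img = np.arange(4.0).reshape(2, 2)  # pretend 0.2 um pixels
print(resample_nearest(img, pixel_size_mm=0.0002).shape)  # (4, 4)
```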
1 change: 1 addition & 0 deletions ivadomed/config/config_default.json
@@ -95,6 +95,7 @@
},
"postprocessing": {},
"evaluation_parameters": {
"object_detection_metrics": true
},
"transformation": {
}
3 changes: 3 additions & 0 deletions ivadomed/config/config_microscopy.json
@@ -91,6 +91,9 @@
"postprocessing": {
"binarize_maxpooling": {}
},
"evaluation_parameters": {
"object_detection_metrics": false
},
"transformation": {
"Resample":
{
53 changes: 33 additions & 20 deletions ivadomed/evaluation.py
Expand Up @@ -86,24 +86,25 @@ def evaluate(bids_df, path_output, target_suffix, eval_params):
params=eval_params)
results_pred, data_painted = eval.run_eval()

# SAVE PAINTED DATA, TP FP FN
fname_paint = str(fname_pred).split('.nii.gz')[0] + '_painted.nii.gz'
nib_painted = nib.Nifti1Image(
dataobj=data_painted,
affine=nib_pred.header.get_best_affine(),
header=nib_pred.header.copy()
)
nib.save(nib_painted, fname_paint)

# For Microscopy PNG/TIF files (TODO: implement OMETIFF behavior)
if "nii" not in extension:
painted_list = imed_inference.split_classes(nib_painted)
# Reformat target list to include class index and be compatible with multiple raters
target_list = ["_class-%d" % i for i in range(len(target_suffix))]
imed_inference.pred_to_png(painted_list,
target_list,
str(path_preds.joinpath(subj_acq)),
suffix="_pred_painted.png")
if eval_params['object_detection_metrics']:
# SAVE PAINTED DATA, TP FP FN
fname_paint = str(fname_pred).split('.nii.gz')[0] + '_painted.nii.gz'
nib_painted = nib.Nifti1Image(
dataobj=data_painted,
affine=nib_pred.header.get_best_affine(),
header=nib_pred.header.copy()
)
nib.save(nib_painted, fname_paint)

# For Microscopy PNG/TIF files (TODO: implement OMETIFF behavior)
if "nii" not in extension:
painted_list = imed_inference.split_classes(nib_painted)
# Reformat target list to include class index and be compatible with multiple raters
target_list = ["_class-%d" % i for i in range(len(target_suffix))]
imed_inference.pred_to_png(painted_list,
target_list,
str(path_preds.joinpath(subj_acq)),
suffix="_pred_painted.png")

# SAVE RESULTS FOR THIS PRED
results_pred['image_id'] = subj_acq
@@ -135,6 +136,8 @@ class Evaluation3DMetrics(object):
bin_struct (ndarray): Binary structure.
size_min (int): Minimum size of objects. Objects that are smaller than this limit can be removed if
"removeSmall" is in params.
object_detection_metrics (bool): Whether object detection metrics (lesion true positive and false detection
rates) are computed.
overlap_vox (int): A prediction and ground-truth are considered as overlapping if they overlap for at least this
amount of voxels.
overlap_ratio (float): A prediction and ground-truth are considered as overlapping if they overlap for at least
@@ -167,6 +170,8 @@ def __init__(self, data_pred, data_gt, dim_lst, params=None):
self.postprocessing_dict = {}
self.size_min = 0

self.object_detection_metrics = params["object_detection_metrics"]

if "target_size" in params:
self.size_rng_lst, self.size_suffix_lst = \
self._get_size_ranges(thr_lst=params["target_size"]["thr"],
@@ -389,8 +394,12 @@ def get_ltpr(self, label_size=None, class_idx=0):
label_size (int): Size of label.
class_idx (int): Label index. If monolabel 0, else ranges from 0 to number of output channels - 1.
Note: computed only if n_obj >= 1.
Note: computed only if n_obj >= 1 and the "object_detection_metrics" evaluation parameter is True.
"""
if not self.object_detection_metrics:
n_obj = 0
return np.nan, n_obj

ltp, lfn, n_obj = self._get_ltp_lfn(label_size, class_idx)

denom = ltp + lfn
@@ -406,8 +415,12 @@ def get_lfdr(self, label_size=None, class_idx=0):
label_size (int): Size of label.
class_idx (int): Label index. If monolabel 0, else ranges from 0 to number of output channels - 1.
Note: computed only if n_obj >= 1.
Note: computed only if n_obj >= 1 and the "object_detection_metrics" evaluation parameter is True.
"""

if not self.object_detection_metrics:
return np.nan

ltp, _, n_obj = self._get_ltp_lfn(label_size, class_idx)
lfp = self._get_lfp(label_size, class_idx)

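Taken together, the gated metrics above reduce to LTPR = TP / (TP + FN) and LFDR = FP / (TP + FP), with objects matched by voxel overlap (default threshold 3). A simplified stand-in for illustration, not ivadomed's ``Evaluation3DMetrics`` (which also handles size bins, overlap ratios, and multiple classes):

```python
import numpy as np
from scipy.ndimage import label

def lesion_metrics(pred, gt, overlap_vox=3, object_detection_metrics=True):
    """Lesion true positive rate and false detection rate for binary masks."""
    if not object_detection_metrics:
        return np.nan, np.nan  # metrics skipped, as in get_ltpr / get_lfdr
    struct = np.ones((3,) * pred.ndim)  # full connectivity
    pred_lab, n_pred = label(pred, structure=struct)
    gt_lab, n_gt = label(gt, structure=struct)
    # A ground-truth object is a TP if enough predicted voxels overlap it.
    tp = sum(np.count_nonzero(pred[gt_lab == i]) >= overlap_vox
             for i in range(1, n_gt + 1))
    fn = n_gt - tp
    # A predicted object is an FP if it barely overlaps the ground truth.
    fp = sum(np.count_nonzero(gt[pred_lab == j]) < overlap_vox
             for j in range(1, n_pred + 1))
    ltpr = tp / (tp + fn) if (tp + fn) else np.nan
    lfdr = fp / (tp + fp) if (tp + fp) else np.nan
    return ltpr, lfdr
```

Setting ``object_detection_metrics=False`` short-circuits to NaN, which is the behavior the commit adds to `get_ltpr` and `get_lfdr`.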
