
Version 3.1.0

@danifranco released this 07 Nov 16:04 · 477 commits to master since this release

New functionality added:

General

Major changes

  • Add ResUNet++ model
  • Add TEST.POST_PROCESSING.REMOVE_BY_PROPERTIES, and its options, to remove instances based on conditions over each instance's properties (see the first sketch after this list). This merges the PROBLEM.INSTANCE_SEG.WATERSHED_CIRCULARITY and PROBLEM.INSTANCE_SEG.DATA_REMOVE_SMALL_OBJ_AFTER functionalities.
  • New options and upgrades to save memory:
    • Move normalization into the load_sample function inside the generators when DATA.*.IN_MEMORY is selected, which allows keeping the dataset in memory in its original dtype (usually uint8 or uint16) rather than float32, consuming less memory at the cost of normalizing per batch (see the second sketch after this list).
    • Update the TEST.REDUCE_MEMORY option to also reduce the dtype of the prediction from float32 to float16.
    • Add TEST.BY_CHUNKS, and its options, to process large images by chunks: the load/save steps work with H5 or Zarr formats. This option helps to generate the model's prediction with overlap/padding and a low memory footprint by constructing it patch by patch (see the third sketch after this list). It is also prepared for multi-GPU inference to accelerate the reconstruction process. It can also work with TIF images, but with H5 and Zarr only the patches being processed are loaded into memory, and nothing else, so it should scale to TBs of data without memory problems.
    • Add TEST.BY_CHUNKS.WORKFLOW_PROCESS, and a few more options related to it, to decide whether or not to continue with the workflow's normal steps after the model prediction. With TEST.BY_CHUNKS.WORKFLOW_PROCESS.TYPE you can tell the workflow to process the predicted image patch by patch or as one whole image. The patch-by-patch option is currently only supported in the DETECTION workflow.
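
As a rough illustration of the kind of filtering TEST.POST_PROCESSING.REMOVE_BY_PROPERTIES enables, the sketch below drops labeled instances by area and circularity using scikit-image. The function name, thresholds, and 2D assumption are illustrative, not BiaPy's actual implementation or option values:

```python
import numpy as np
from skimage.measure import regionprops

def remove_by_properties(instances, min_area=100, min_circularity=0.3):
    """Remove instances from a 2D labeled image by property thresholds.

    Circularity is computed as 4*pi*area / perimeter**2 (1.0 for a disk).
    """
    out = instances.copy()
    for prop in regionprops(instances):
        perimeter = prop.perimeter if prop.perimeter > 0 else 1e-8
        circularity = 4.0 * np.pi * prop.area / perimeter**2
        if prop.area < min_area or circularity < min_circularity:
            out[out == prop.label] = 0  # set the instance to background
    return out
```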
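
The per-batch normalization idea behind moving normalization into load_sample can be sketched as follows: keep the raw uint8/uint16 arrays in memory and cast to float32 only when a sample is fetched. This class and normalization scheme are a minimal illustration, not BiaPy's generator code:

```python
import numpy as np
from torch.utils.data import Dataset

class InMemoryDataset(Dataset):
    """Keep samples in their original dtype; normalize lazily per sample."""

    def __init__(self, images):  # e.g. a list of uint8/uint16 arrays
        self.images = images     # raw dtype: ~2-4x less RAM than float32

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        x = self.images[idx]
        # Cast and scale only now, so only the current batch is float32
        return x.astype(np.float32) / np.iinfo(x.dtype).max
```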
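
Finally, a minimal sketch of the by-chunks idea: read a Zarr volume patch by patch and write predictions straight back to disk, so only one patch is in RAM at a time. Overlap/padding handling and multi-GPU logic are omitted, `model` is assumed to be any callable mapping a NumPy patch to a same-shaped prediction, and the function is illustrative rather than BiaPy's implementation:

```python
import numpy as np
import zarr

def predict_by_chunks(in_path, out_path, model, patch=(64, 64, 64)):
    """Run inference over a large volume one patch at a time."""
    src = zarr.open(in_path, mode="r")              # lazy: nothing loaded yet
    dst = zarr.open(out_path, mode="w", shape=src.shape,
                    chunks=patch, dtype="float16")  # float16 output saves space
    for z in range(0, src.shape[0], patch[0]):
        for y in range(0, src.shape[1], patch[1]):
            for x in range(0, src.shape[2], patch[2]):
                sl = (slice(z, z + patch[0]),
                      slice(y, y + patch[1]),
                      slice(x, x + patch[2]))
                block = np.asarray(src[sl], dtype=np.float32)  # only this patch in RAM
                dst[sl] = model(block).astype(np.float16)
    return dst
```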

Minor changes

  • Delete MODEL.KERNEL_INIT
  • TRAIN.PATIENCE default changed to -1
  • Add utils/scripts/h5_to_zarr.py auxiliary script
  • Now the warmup cosine learning rate scheduler is stepped by iterations rather than by epochs (see the sketch after this list).
  • Update notebooks to work with the PyTorch-based BiaPy
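
A per-iteration warmup + cosine schedule follows the generic pattern below; the function name and formula are a common recipe, not BiaPy's exact scheduler:

```python
import math

def warmup_cosine_lr(it, total_iters, warmup_iters, base_lr, min_lr=0.0):
    """Learning rate at iteration `it`: linear warmup, then cosine decay."""
    if it < warmup_iters:
        return base_lr * (it + 1) / warmup_iters  # linear ramp-up
    progress = (it - warmup_iters) / max(1, total_iters - warmup_iters)
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * progress))
```

Stepping this once per batch instead of once per epoch yields a smooth schedule even for short trainings with few epochs.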

Workflows

Instance segmentation

  • Add TEST.POST_PROCESSING.CLEAR_BORDER to remove instances touching the image border (sketched below)
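
The effect can be reproduced with scikit-image's clear_border; assuming (but not confirmed here) that this is what the option relies on:

```python
from skimage.segmentation import clear_border

# `instances` is a 2D/3D labeled image; any label touching the
# image border is set to background (0).
cleaned = clear_border(instances)
```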

Denoising

  • Change N2V masks to always be created on the fly, saving memory

Detection

  • Remove TEST.DET_LOCAL_MAX_COORDS option
  • Add TEST.DET_POINT_CREATION_FUNCTION, and a few more options related to it, to decide whether to use the peak_local_max or blob_log function (from scikit-image) to create the final points from the probability map (see the sketch after this list).
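
For reference, the two scikit-image functions the option chooses between are used roughly as follows; the parameter values here are arbitrary examples, not BiaPy's defaults:

```python
from skimage.feature import peak_local_max, blob_log

# `probs` is the model's probability map (2D here for simplicity).

# Option 1: local maxima above a threshold -> (N, ndim) coordinates
points_peak = peak_local_max(probs, min_distance=5, threshold_abs=0.5)

# Option 2: Laplacian-of-Gaussian blobs -> rows of (y, x, sigma);
# drop the last column to keep only the detected centers
blobs = blob_log(probs, min_sigma=1, max_sigma=5, threshold=0.1)
points_log = blobs[:, :-1].astype(int)
```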

SSL

  • Add MODEL.MAE_MASK_RATIO option
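
To give an idea of what the mask ratio controls, here is a generic sketch of MAE-style random patch masking, where the given fraction of patch tokens is hidden from the encoder; this is illustrative, not BiaPy's MAE code:

```python
import torch

def random_masking(tokens, mask_ratio=0.75):
    """Randomly keep (1 - mask_ratio) of the patch tokens, as in MAE.

    tokens: (batch, num_patches, dim) patch embeddings.
    Returns the visible tokens and the indices that were kept.
    """
    b, n, _ = tokens.shape
    n_keep = int(n * (1 - mask_ratio))
    noise = torch.rand(b, n)                 # one random score per patch
    keep = noise.argsort(dim=1)[:, :n_keep]  # patches with lowest scores survive
    visible = torch.gather(
        tokens, 1, keep.unsqueeze(-1).expand(-1, -1, tokens.size(-1)))
    return visible, keep
```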

SR

  • Add 3D support
  • Add notebooks

Bugs fixed:

  • Fix bug in the 2D UNETR definition
  • Fix bug in 2D cross validation
  • Fix minor bugs introduced when switching from TensorFlow to PyTorch