Releases: NKI-AI/direct

DIRECT v2.0.0 Release Notes

03 Apr 12:51

We're excited to announce DIRECT v2.0.0, featuring several advancements and updates. Here's a snapshot of what's new:

  • New Features: New MRI transforms, datasets, loss functions, and models, including challenge-winning models.
  • User Experience Enhancements: Updated commands and additional examples.
  • Performance Optimizations: Addressed memory and performance issues.
  • Code Quality Enhancements: Significant improvements for a more robust and reliable codebase.

Dive into the details below to see how DIRECT v2.0.0 can enhance your work.

Major Updates Since v1.0.0

  • Major New Features:
    • Additional MRI transforms (#210, #226, #233, #235)
    • Additional Loss functions (#226, #262)
    • Additional MRI models, including challenge-winning models (RecurrentVarNet, winner of the MC-MRI challenge 2022, and vSHARPNet, winner of the CMRxRecon challenge 2023) (#156, #180, #228, #271, #273)
    • Additional subsampling functions, including Variable density Poisson, Equispaced with exploited symmetry, Gaussian 1D and Gaussian 2D (#216, #230)
    • Additional (Shepp Logan phantom) dataset (#202)
    • 3D functionality including transforms and vSHARP 3D (#272, #273)
  • User Experience Improvements:
    • Refactored direct train and direct predict commands (#202)
    • Added experiment configurations and/or examples (#180, #199, #271)
  • Performance Improvements:
    • Fix high memory consumption caused by h5py (#174)
    • Fix RIM performance (#208)
  • Code quality changes (#194, #196, #226, #228, #266)

Changes Since v1.0.4

New Features

  • New MRI model architectures: including ConjGradNet for improved imaging, IterDualNet (similar to JointICNet without sensitivity map optimization), ResNet as a new denoiser model, VarSplitNet for variable splitting optimization with deep learning (#228) and VSharpNet as presented in vSHARP: variable Splitting Half-quadratic ADMM algorithm for Reconstruction of inverse-Problems along with its 3D variant VSharpNet3D (#270, #273).
  • New MRI transforms: including EspiritCalibration (via a power-method algorithm), CropKspace, RandomFlip, RandomRotation, ComputePadding, ApplyPadding, ComputeImage, RenameKeys, and ComputeScalingFactor.
  • New functionals and loss functions: NMSE, NRMSE, NMAE, SobelGradL1Loss, SobelGradL2Loss, hfen_l1, hfen_l2, HFENLoss, HFENL1Loss, HFENL2Loss, snr, SNRLoss and SSIM3DLoss (#226, #262).
  • New masking functions:
    • Gaussian in 1D (rectilinear sampling) and in 2D (point sampling): Gaussian1DMaskFunc and Gaussian2DMaskFunc, respectively. Implemented using Cython (#230).
  • 3D MRI Reconstruction functionality:
    • MRI transforms extended to work for input 3D k-space data, in which the third dimension represents slice or time (#272).
    • Implemented 3D variant of vSHARP, VSharpNet3D, (#273).
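
As an illustration of the rectilinear Gaussian sampling idea behind Gaussian1DMaskFunc (the library's version is implemented in Cython; the function below is a hypothetical NumPy sketch, not DIRECT's API), one can keep a fully sampled low-frequency band and draw the remaining phase-encoding lines with Gaussian-weighted probability:

```python
import numpy as np

def gaussian_1d_mask(num_columns, acceleration=4.0, center_fraction=0.08,
                     sigma_fraction=0.15, seed=0):
    """Sketch of a Gaussian 1D (rectilinear) sampling mask.

    Columns near the k-space center are always kept; the remaining
    phase-encoding lines are drawn with Gaussian-weighted probability
    until the target acceleration (num_columns / num_sampled) is reached.
    """
    rng = np.random.default_rng(seed)
    mask = np.zeros(num_columns, dtype=bool)

    # Always keep a fully sampled low-frequency center band.
    num_center = int(round(num_columns * center_fraction))
    start = (num_columns - num_center) // 2
    mask[start : start + num_center] = True

    # Gaussian weights over column positions, peaked at the k-space center.
    positions = np.arange(num_columns)
    center = (num_columns - 1) / 2.0
    sigma = sigma_fraction * num_columns
    weights = np.exp(-0.5 * ((positions - center) / sigma) ** 2)
    weights[mask] = 0.0  # don't re-draw columns already in the center band
    weights /= weights.sum()

    # Draw extra columns until the target number of sampled lines is reached.
    target = int(round(num_columns / acceleration))
    extra = max(target - num_center, 0)
    chosen = rng.choice(num_columns, size=extra, replace=False, p=weights)
    mask[chosen] = True
    return mask
```

A 2D variant would apply the same weighting over point coordinates instead of columns; DIRECT's Cython implementations will differ in detail.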

Improvements

  • Refactored MRI model engines to only implement forward_method instead of _do_iteration. (#226)
  • Transforms configuration for training and inference now implemented by flattening input DictConfig from omegaconf using dict_flatten (#235, #250).
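
The flattening idea can be sketched as follows; dict_flatten here is a simplified stand-in for DIRECT's helper, operating on a plain dict (e.g. obtained via OmegaConf.to_container) rather than a DictConfig:

```python
def dict_flatten(nested, parent_key="", sep="."):
    """Flatten a nested dict into a single level with dotted keys.

    Hypothetical sketch: DIRECT's own dict_flatten may have a different
    signature and handle DictConfig objects directly.
    """
    flat = {}
    for key, value in nested.items():
        full_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            flat.update(dict_flatten(value, full_key, sep))
        else:
            flat[full_key] = value
    return flat
```

For example, `{"transforms": {"cropping": {"crop": [128, 128]}}}` flattens to `{"transforms.cropping.crop": [128, 128]}`, which makes nested transform settings addressable by a single key.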

Code Quality Changes

  • Minor quality improvements (#226).
  • Introduced DirectEnum as a base class for clean typing of options in modules such as transforms (#228, #266).

Other Changes

  • Reformatted with a new version of black (#241)
  • Updated for new versions of tooling packages (#263)
  • Updated documentation (#226, #242 - #272)

Acknowledgments

This release was made possible by the hard work and dedication of our team and contributors.

Documentation and Changelogs

Access detailed documentation for DIRECT v2.0.0 at our documentation site.

New MRI transform features, new loss functions, code quality improvements, and an LR scheduler fix

19 Oct 15:04

New features

  • New training losses implemented (NMSE, NRMSE, NMAE, SobelGradL1Loss, SobelGradL2Loss), as well as k-space losses (#226)
  • New MRI transforms
    • ComputeZeroPadding: computes padding in k-space input (#226)
    • ApplyZeroPadding: applies padding (#226)
    • ComputeImage: computes image from k-space input (#226)
    • RenameKeys: rename keys in input (#226)
    • CropKSpace: transforms k-space to image domain, crops it, and backprojects it (#210)
    • ApplyMask: applies sampling mask (#210)
  • New sub-sampling patterns (#216):
    • Variable density Poisson masking function (cython implementation)
    • FastMRI's Magic masking function
  • New model added: CRIM (#156)

Code quality

  • mri_models now performs the _do_iteration method; child engines implement forward_function, which returns output_image and/or output_kspace (#226)

Bugfixes

  • torch.where output needed to be made contiguous before being passed to the FFT, due to a new torch version (#216)
  • HasStateDict type changed to include torch.optim.lr_scheduler._LRScheduler which was missing before, causing the checkpointer to not save/load the state_dict of LRSchedulers (#218)
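
The contiguity workaround described above can be illustrated roughly like this (a hypothetical sketch; the shapes and masking step are illustrative, not DIRECT's transform code):

```python
import torch

# Toy multi-slice k-space and a rectilinear sampling mask.
kspace = torch.randn(4, 8, 8, dtype=torch.complex64)
mask = torch.zeros(4, 8, 8, dtype=torch.bool)
mask[..., ::2] = True

# As the changelog notes, the torch.where output had to be made
# contiguous before being handed to the FFT under the newer torch version.
masked = torch.where(mask, kspace, torch.zeros_like(kspace)).contiguous()
image = torch.fft.ifft2(masked)
```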

Full Changelog: v1.0.3...v1.0.4

Fixed bugs affecting RIM performance

24 Oct 10:47

What's Changed

  • RIM performance fixed (#208)
    • modulus_if_complex reinstated, but it now requires setting the complex axis
  • MRI model metrics in MRIEngine check whether the prediction is complex (complex axis = last) and apply the modulus if it is (#208).
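
The check-then-modulus behavior can be sketched in NumPy (a hypothetical helper mirroring the idea, not DIRECT's modulus_if_complex signature): a trailing axis of size 2 is treated as (real, imaginary) and reduced by root-sum-of-squares, while real inputs pass through unchanged.

```python
import numpy as np

def modulus_if_complex(data, complex_axis=-1):
    """Return the modulus if `data` has a size-2 complex axis, else pass through.

    Caveat of this sketch: a genuinely real array whose trailing axis
    happens to have size 2 would be misdetected, which is exactly why
    requiring an explicit complex axis (as in the release above) is safer.
    """
    if data.shape[complex_axis] == 2:
        return np.sqrt(np.sum(data ** 2, axis=complex_axis))
    return data
```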

Full Changelog: v1.0.2...v1.0.3

CVPR experiments, SheppLogan datasets, New training command

19 Oct 15:07

New features

  • Normalised ConvGRU model (NormConv2dGRU) following the implementation of NormUnet2d (#176)
  • Shepp-Logan datasets based on "2D & 3D Shepp-Logan phantom standards for MRI", 19th International Conference on Systems Engineering, IEEE, 2008 (#202):
    • SheppLoganProtonDataset
    • SheppLoganT1Dataset
    • SheppLoganT2Dataset
  • Sensitivity map simulator producing Gaussian distributions, with the number of centers equal to the number of desired coils (#202)
  • Documentation updates (#180, #183, #196)
  • Experiments for our CVPR 2022 paper "Recurrent Variational Network: A Deep Learning Inverse Problem Solver applied to the task of Accelerated MRI Reconstruction" as shown in the paper (#180)
  • Tutorials/examples for Calgary Campinas Dataset and Google Colab added (#199)
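
The Gaussian sensitivity-map simulator mentioned above might be sketched as follows (illustrative NumPy only; the names and the root-sum-of-squares normalization are assumptions, not DIRECT's implementation):

```python
import numpy as np

def simulate_sensitivity_maps(shape, num_coils, sigma_fraction=0.5, seed=0):
    """Simulate coil sensitivities as 2D Gaussians, one random center per
    coil, normalized so the root-sum-of-squares over coils is 1 everywhere."""
    rng = np.random.default_rng(seed)
    ny, nx = shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    sigma = sigma_fraction * max(ny, nx)
    maps = np.empty((num_coils, ny, nx))
    for coil in range(num_coils):
        # One Gaussian center per desired coil.
        cy, cx = rng.uniform(0, ny), rng.uniform(0, nx)
        maps[coil] = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    rss = np.sqrt(np.sum(maps ** 2, axis=0))
    return maps / rss
```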

Code quality

  • Remove unambiguous complex assertions (#194)
  • modulus_if_complex function removed; modulus now requires specifying the axis (#194)
  • Added tests/end-to-end tests. Coverage to 81% (#196)
  • Improve typing (#196)
  • mypy and pylint fixes (#196)
  • Docker image updated (#204)
  • Refactored direct train, direct predict and python3 projects/predict_val.py so that a path to data is no longer strictly required, since some datasets (e.g. the Shepp-Logan datasets) don't need one; build_dataset_from_input now relies on **kwargs. Configs and docs were refactored to comply with the above. (#202)
    • Train command example:

      direct train <experiment_directory> --num-gpus <number_of_gpus> --cfg <path_or_url_to_yaml_file> \
          [--training-root <training_data_root> --validation-root <validation_data_root>] [--other-flags]

Bugfixes

  • Minor update to the Normalize transform due to a new version dependency (#177)
  • Minor bug fixes (#196)

Full Changelog: v1.0.1...v1.0.2

Bug fix release reducing memory consumption and improving code quality

22 Feb 22:59

In v1.0.1 we mainly provide bug fixes (including a memory leak) and code quality improvements.

Code Quality Changes

  • Fix high memory consumption and code quality improvements by @georgeyiasemis in #174
    • Downgraded h5py from 3.6.0 to 3.3.0
    • Refactored the FastMRIDataset header reader function
    • MRI model engines now only perform the _do_iteration method; MRIModelEngine includes all common methods for MRI models.
    • The evaluate method of MRI models has been rewritten.

Full Changelog: v1.0.0...v1.0.1

First stable release including baselines

07 Jan 14:29
d472bdd

In the first stable release, we provide implementations of the baselines, including models trained on publicly available datasets.

Code quality

  • Removed experimental named tensor feature, enabling the update to pytorch 1.9 (PR #103)
  • Removed large files from the repository; they are now stored in an S3 bucket and downloaded when used.
  • Code coverage checks and badges are added (#153)
  • Added several tests; code coverage is now 73% (#144)
  • Tests are now in a separate folder (#142)
  • Outdated checkpoints are removed (#146)
  • New models are added, requiring that MRIReconstruction is merged with RIM (#113)
  • Allow reading checkpoints from S3 storage (#133, Closes #135)
  • Allow for remote config files (#133, Closes #135)

Internal changes

  • PyTorch 1.9 and Python 3.8 are now required.

Bugfixes

  • The evaluation function had a problem where the last volume was sometimes dropped (#111)
  • Checkpointer tried to load state_dict if key is of the format __<>__ (#144 closes #143)
  • Fixed crash when validation set is empty (#125)

Full Changelog: v0.2...v1.0.0

v0.2

28 Oct 19:37
Pre-release

Major release with many bug fixes and baseline models

Many new features have been added, most of which likely introduce breaking changes. Several performance issues have been addressed.

An improved version of the winning solution for the Calgary-Campinas challenge is also added in v0.2, including model weights.

New features

  • Baseline model for the Calgary-Campinas challenge (see model_zoo.md)
  • Added FastMRI 2020 dataset.
  • Challenge metrics for FastMRI and the Calgary-Campinas.
  • Allow initialization from zero-filled or external input.
  • Allow initialization from something other than the zero-filled image in train_rim.py by passing a directory.
  • Refactored the environment class to allow the use of models other than RIM.
  • Added an inference key to the configuration, which sets the proper transforms to be used during inference. This became necessary when we introduced the possibility of having multiple training and validation sets; an inference script honoring these changes was created.
  • Separate validation and testing scripts for the Calgary-Campinas challenge.

Technical changes in functions

  • direct.utils.io.write_json serializes real-valued numpy and torch objects.
  • direct.utils.str_to_class now supports partial argument parsing, e.g. fft2(centered=False) will be properly parsed
    in the config.
  • Added support for regularizers.
  • Engine is now aware of the backward and forward operators, these are not passed in the environment anymore, but are
    properties of the engine.
  • PyTorch 1.6 and Python 3.8 are now required.
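
The partial-argument parsing described for direct.utils.str_to_class could work roughly like this (a simplified hypothetical stand-in that resolves names from a dict instead of importing modules):

```python
import ast
import functools
import re

def str_to_partial(spec, namespace):
    """Resolve 'name(kw=value, ...)' from `namespace` into a functools.partial.

    A bare 'name' returns the object itself; literal values are parsed
    with ast.literal_eval. This sketch does not handle commas inside
    literal values (e.g. lists) or positional arguments.
    """
    match = re.fullmatch(r"(\w+)(?:\((.*)\))?", spec.strip())
    name, arg_string = match.group(1), match.group(2)
    obj = namespace[name]
    if not arg_string:
        return obj
    kwargs = {}
    for part in arg_string.split(","):
        key, _, value = part.partition("=")
        kwargs[key.strip()] = ast.literal_eval(value.strip())
    return functools.partial(obj, **kwargs)
```

With this, a config entry like "fft2(centered=False)" resolves to the fft2 callable with centered pre-bound to False, matching the behavior described above.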

Work in progress

  • Added a preliminary 3D version of RIM. This includes changing several transforms to dimension-independent versions, and intends to support 3D + time data.

Bugfixes

  • Fixed progressive slowdown during validation by refactoring the engine and turning lists of dataloaders into a generator; memory pinning was also disabled to alleviate this problem.
  • Fixed a bug that when initializing from a previous checkpoint additional models were not loaded.
  • Fixed normalization of the sensitivity map in rim_engine.py.
  • direct.data.samplers.BatchVolumeSampler returned the wrong length, which caused volumes to be dropped during validation.

Necessary bug fixes and logging improvements

16 Aug 10:13

In this version we provide necessary bugfixes for v0.1.1 and several improvements:

Big changes

  • Bugfixes in FastMRI dataset class.
  • Improvements in logging.
  • Added more exceptions for unexpected occurrences.
  • Allow the reading of subsets of the dataset by providing lists.
  • Add an augmentation to pad coils (zero-padding), allowing the batching of images with a different number of coils.
  • Add the ability to add additional models to the engine class by configuration (WIP).
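
The coil zero-padding augmentation can be sketched as follows (hypothetical helper names; DIRECT's actual transform may differ): pad the coil axis with zeros so volumes with different coil counts can be stacked into one batch.

```python
import numpy as np

def pad_coils(kspace, target_coils, coil_axis=0):
    """Zero-pad the coil dimension of `kspace` up to `target_coils`."""
    num_coils = kspace.shape[coil_axis]
    if num_coils >= target_coils:
        return kspace
    pad = [(0, 0)] * kspace.ndim
    pad[coil_axis] = (0, target_coils - num_coils)
    return np.pad(kspace, pad)

# Batch two samples with 12 and 16 coils by padding both to 16.
batch = np.stack([pad_coils(np.ones((12, 8, 8)), 16),
                  pad_coils(np.ones((16, 8, 8)), 16)])
```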

Stylistic changes

Black is now used as the code formatter. This had the consequence that certain parts became hard to read, so these were refactored to improve readability after applying black.

Breaking changes

As you can expect from a pre-release, while we intend to keep it to a minimum, it is possible that things break, especially in the configuration files. In case you encounter one, please open an issue.

Pytorch 1.6, mixed precision update

03 Aug 09:36
Pre-release

In v0.1.1:

  • The PyTorch version has been updated to 1.6 (also in the Dockerfile), and PyTorch 1.6 is now required
  • Mixed precision support via the --mixed-precision flag. On supporting hardware this can speed up training by more than 33% and reduce memory usage by more than 40%.
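
The pattern behind the --mixed-precision flag is standard PyTorch automatic mixed precision; the following is a minimal generic sketch (not DIRECT's training loop):

```python
import torch

model = torch.nn.Linear(8, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# GradScaler guards float16 gradients against underflow; it is a
# no-op when CUDA is unavailable.
scaler = torch.cuda.amp.GradScaler(enabled=torch.cuda.is_available())

x, y = torch.randn(4, 8), torch.randn(4, 1)
device_type = "cuda" if torch.cuda.is_available() else "cpu"
for _ in range(2):
    optimizer.zero_grad()
    # Autocast runs eligible ops in reduced precision on supporting hardware.
    with torch.autocast(device_type=device_type):
        loss = torch.nn.functional.mse_loss(model(x), y)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```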

First feature complete version for recurrent inference machines

28 Jul 12:22

In this release, we have significantly expanded upon v0.0.1:

  • The logging has been made more elaborate (better sorting in tensorboard, more descriptive console output).
  • Metrics and losses are configurable and are logged separately.
  • Training works for both FastMRI and the Calgary-Campinas datasets.
  • The winning solution for the Calgary-Campinas challenge is included.