Multi-frame Image Registration for Automated Ventricular Function Assessment in Single Breath-hold Cine MRI Using Limited Labels
This work has been accepted for publication in *Magnetic Resonance in Medicine*.
This repository contains the official implementation of a deep learning framework for automated ventricular function assessment from fully sampled and accelerated cine MRI.
Unlike existing approaches that treat registration, reconstruction, and segmentation independently, our method integrates these tasks in a unified framework to leverage their interdependence:
- Multi-frame image registration: the MOtion Propagation Network (MOPNet) encodes temporal dynamics across multiple cardiac frames for consistent motion estimation.
- Joint refinement: motion estimates improve reconstruction and segmentation quality, while segmentation masks provide anatomical guidance for registration.
- Automated functional analysis: volumetric measures and strain are computed directly from accelerated cine MRI, enabling single breath-hold assessments.
- Robust to undersampling: reliable ventricular delineation even at high accelerations (up to R=24).
- Improved segmentation: Dice similarity improved by 9–22% over existing deep learning methods for endocardium, epicardium, and right ventricle.
- Clinical accuracy: Left and right ventricular ejection fraction strongly correlated with manual reference (r > 0.9).
- Consistent strain analysis across accelerations, enabling comprehensive ventricular function assessment.
Clone the repository:
```bash
git clone https://github.com/lab-midas/MOPNet.git
cd MOPNet
```
Install the required dependencies:
```bash
pip install -r requirements.txt
```
Before running the training pipeline, set up your JSON configuration file (e.g., `configs/train_joint_fully_sampled_data.json`). The configuration includes the following sections (an illustrative skeleton follows the list):
- `data_loader` & `test_data_loader`: Specify dataset paths, batch sizes, and other loader parameters.
- `model`: Define the model architecture and parameters.
- `training`: Control which stages to train:
  - `reg_train` – registration training
  - `seg_train` – segmentation training
  - `joint_train` – joint refinement of registration and segmentation
- `logs`: Paths to save or load checkpoints for each stage.
- `loss_functions`: Define weights for different loss components.
- `debug`: Set to `true` to enable debug mode (uses fewer samples and allows GPU selection).
- `wandbkey`: Optional API key for logging experiments with Weights & Biases.
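Putting the sections together, a minimal configuration skeleton might look as follows. Only the top-level section names and the `reg_train`/`seg_train`/`joint_train` flags come from the description above; every nested key and value here is an illustrative placeholder, not the repository's exact schema:

```python
import json

# Illustrative config skeleton; nested keys/values are placeholders.
config = {
    "data_loader": {"data_path": "/path/to/train", "batch_size": 4},
    "test_data_loader": {"data_path": "/path/to/test", "batch_size": 1},
    "model": {"name": "MOPSegNet"},
    "training": {"reg_train": True, "seg_train": True, "joint_train": True},
    "logs": {
        "reg_ckpt": "checkpoints/reg.pth",
        "seg_ckpt": "checkpoints/seg.pth",
        "joint_ckpt": "checkpoints/joint.pth",
    },
    "loss_functions": {"registration_weight": 1.0, "segmentation_weight": 1.0},
    "debug": False,
    "wandbkey": "",  # optional W&B API key
}
print(json.dumps(config, indent=2))
```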
The repository contains several scripts for motion estimation, segmentation, and reconstruction, as well as strain analysis and flow-field visualization. Key scripts include:
**Training script.** Trains the MOPSegNet model for multi-frame registration and segmentation of cine MRI; a simplified sketch of the staged flow is given below.
Features:
- Loads training and test datasets using configuration from a JSON file.
- Sequentially trains:
- Registration network
- Segmentation network
- Joint fine-tuning of both networks
- Saves model checkpoints at each stage.
- Supports debug mode and Weights & Biases (W&B) logging.
Inputs:
- JSON config file specifying dataset paths, model parameters, and training options.
Outputs:
- Model checkpoints for registration, segmentation, and joint training.
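A minimal sketch of the staged training flow described above, assuming toy networks and random data; the repository's actual architectures, losses (flow smoothness, Dice, etc.), and trainer APIs will differ:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

class ToyNet(nn.Module):
    """Stand-in for the registration/segmentation networks."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 1, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)

def train_stage(params, nets, loader, epochs, ckpt_path):
    """One training stage: optimize `params`, then checkpoint `nets`."""
    opt = torch.optim.Adam(params, lr=1e-4)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            out = x
            for net in nets:  # chain both nets in the joint stage
                out = net(out)
            nn.functional.mse_loss(out, y).backward()
            opt.step()
    torch.save({i: n.state_dict() for i, n in enumerate(nets)}, ckpt_path)

loader = DataLoader(
    TensorDataset(torch.randn(8, 1, 32, 32), torch.randn(8, 1, 32, 32)),
    batch_size=4,
)
reg_net, seg_net = ToyNet(), ToyNet()
flags = {"reg_train": True, "seg_train": True, "joint_train": True}

# Stages run sequentially, gated by the config's "training" flags and
# checkpointed per its "logs" paths.
if flags["reg_train"]:
    train_stage(reg_net.parameters(), [reg_net], loader, 1, "reg.pth")
if flags["seg_train"]:
    train_stage(seg_net.parameters(), [seg_net], loader, 1, "seg.pth")
if flags["joint_train"]:
    both = list(reg_net.parameters()) + list(seg_net.parameters())
    train_stage(both, [reg_net, seg_net], loader, 1, "joint.pth")
```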
**Motion prediction script.** Predicts inter-frame motion fields from cine MRI using a trained MOPSegNet model; a save/visualization sketch is given below.
Features:
- Loads a list of subjects and cine data (H5 or pre-reconstructed `.npy` files) from CSV and directories.
- Supports multiple acceleration factors (R values).
- Predicts forward and backward motion between all frame pairs using neighboring-frame context.
- Preprocesses images, crops outputs, and optionally visualizes flow fields.
- Saves predicted motion fields per slice for further reconstruction or analysis.
Inputs:
- CSV file with subject IDs
- H5 dataset or pre-reconstructed images
- Trained model checkpoint (`.pth`)
- JSON configuration file specifying model parameters
Outputs:
- Numpy arrays of predicted motion fields per slice and subject.
- Optional visualization of flow fields (requires `flow_vis`).
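For illustration only, saving a predicted motion field per slice and rendering one frame pair with the `flow_vis` package might look like this; the array shape, file naming, and random flow are assumptions standing in for an actual MOPSegNet prediction:

```python
import numpy as np
import flow_vis  # optional dependency for flow-field visualization

# Stand-in for one slice's forward flow between all frame pairs,
# shape (T, T, 2, H, W); shape and naming are assumptions.
T, H, W = 25, 192, 192
flow_fwd = np.random.randn(T, T, 2, H, W).astype(np.float32)

# Save the per-slice motion field for the reconstruction step.
np.save("subject01_slice00_flow_fwd.npy", flow_fwd)

# Color-code the flow from frame 0 to frame 1;
# flow_vis.flow_to_color expects an (H, W, 2) array.
rgb = flow_vis.flow_to_color(flow_fwd[0, 1].transpose(1, 2, 0))
```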
**Reconstruction script.** Performs motion-constrained MRI reconstruction using predicted optical flow fields; a stripped-down reconstruction loop is sketched below.
Features:
- Loads subject cine MRI data (H5 format) and previously predicted motion fields.
- Supports accelerated imaging (various R values) with sampling masks.
- Uses low-rank + spatiotemporal TV regularization for reconstruction.
- Performs motion-constrained reconstruction using forward and adjoint motion operators.
- Saves magnitude images per slice and subject.
- Optional visualization of reconstructed slices.
- Tracks errors for slices that fail reconstruction.
Inputs:
- CSV file listing subject IDs
- H5 dataset with complex cine images
- Predicted motion fields from MOPNet
- Sampling mask (generated or loaded from file)
- Model parameters (regularization weights, number of iterations)
Outputs:
- Reconstructed images per slice and subject saved as `.npy` files
- Error log for failed reconstructions
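As a rough illustration of this step, the sketch below runs a stripped-down gradient-descent reconstruction with a data-consistency term and a quadratic temporal-smoothness penalty standing in for the spatiotemporal TV term; the low-rank regularization and the forward/adjoint motion operators built from the predicted flows are omitted for brevity, and all shapes and weights are assumptions:

```python
import numpy as np

# Toy single-slice problem: recover x from undersampled k-space y = M·F·x.
T, H, W = 8, 64, 64
rng = np.random.default_rng(0)
mask = rng.random((T, H, W)) < 0.25             # sampling mask, R ≈ 4
truth = rng.standard_normal((T, H, W)) + 1j * rng.standard_normal((T, H, W))
y = mask * np.fft.fft2(truth, norm="ortho")     # undersampled k-space data

def A(x):   # forward operator: image series -> masked k-space
    return mask * np.fft.fft2(x, norm="ortho")

def At(k):  # adjoint operator: masked k-space -> image series
    return np.fft.ifft2(mask * k, norm="ortho")

x = At(y)                                       # zero-filled initialization
lam, step = 0.01, 0.5
for _ in range(50):
    grad_dc = At(A(x) - y)                      # data-consistency gradient
    dt = np.roll(x, -1, axis=0) - x             # circular temporal difference
    grad_sm = np.roll(dt, 1, axis=0) - dt       # adjoint difference of dt
    x = x - step * (grad_dc + lam * grad_sm)

np.save("subject01_slice00_recon.npy", np.abs(x))  # magnitude image per slice
```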
This project was inspired by and developed with the help of the following repositories:
- DeepStrain – for strain analysis techniques.
- VideoFlow – for the motion estimation method.
We gratefully acknowledge the authors for making their code publicly available, which greatly facilitated this work.