Code repository for "Automated segmentation of median nerve in dynamic sonography using deep learning: Evaluation of model performance".
The current code supports inference only.
- PyTorch 1.1 or newer (for U-Net, FPN, Mask R-CNN)
- torchvision 0.3.0 or newer (for U-Net, FPN, Mask R-CNN)
- TensorFlow 1.9.0 or newer (for Deeplabv3+)
- TensorFlow 2.x is not supported
- opencv
- pillow
- matplotlib
- pandas
- numpy
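Because TensorFlow must stay on the 1.x line while PyTorch only needs a minimum version, a small sanity check can catch an incompatible environment before running inference. This is a sketch for illustration; the helper names are hypothetical and not part of the repository:

```python
def is_supported_tf(version: str) -> bool:
    # TensorFlow must be >= 1.9 but < 2.0, per the requirements above
    major, minor = (int(p) for p in version.split(".")[:2])
    return (major, minor) >= (1, 9) and major < 2

def is_supported_torch(version: str) -> bool:
    # PyTorch must be >= 1.1
    major, minor = (int(p) for p in version.split(".")[:2])
    return (major, minor) >= (1, 1)
```

In practice the version strings would come from `torch.__version__` and `tensorflow.__version__`.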
Ground-truth masks should follow the policy below.
- Naming: <input_file_name>_mask.png
- Format: png
- Content: a binary map with 1 for median-nerve pixels and 0 for background pixels.
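As a sketch of this policy (our illustration; these helpers are not part of the repository), the expected mask filename can be derived from the input filename, and an arbitrary label array binarized to the required 0/1 map:

```python
import os
import numpy as np

def mask_filename(input_path):
    # <input_file_name>_mask.png, per the naming policy above
    stem, _ = os.path.splitext(os.path.basename(input_path))
    return stem + "_mask.png"

def to_binary_mask(label_array):
    # 1 for median-nerve pixels, 0 for background
    return (np.asarray(label_array) > 0).astype(np.uint8)
```

The resulting array would then be saved as a PNG (e.g. with pillow, which is already a requirement).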
Model_zoo will be updated further.
U-Net, with ground truth:
python ./inference_deeplab_option.py --predict_dir <folder_contains_input_images> --model_type unet --backbone <resnet101/resnext101_32x8d> --output_dir <folder_for_output> --gt_dir <folder_contains_ground_truth_masks> --model_path <weights_file>
U-Net, without ground truth:
python ./inference_option_withoutgt.py --predict_dir <folder_contains_input_images> --model_type unet --backbone <resnet101/resnext101_32x8d> --output_dir <folder_for_output> --model_path <weights_file>
FPN, with ground truth:
python ./inference_deeplab_option.py --predict_dir <folder_contains_input_images> --model_type fpn --backbone <resnet101/resnext101_32x8d> --output_dir <folder_for_output> --gt_dir <folder_contains_ground_truth_masks> --model_path <weights_file>
FPN, without ground truth:
python ./inference_option_withoutgt.py --predict_dir <folder_contains_input_images> --model_type fpn --backbone <resnet101/resnext101_32x8d> --output_dir <folder_for_output> --model_path <weights_file>
Deeplabv3+ was trained with the DeepLab project.
With ground truth:
python ./inference_deeplab.py --predict_dir <folder_contains_input_images> --output_dir <folder_for_output> --gt_dir <folder_contains_ground_truth_masks> --model_path <frozen_graph_file>
Without ground truth:
python ./inference_deeplab_withoutgt.py --predict_dir <folder_contains_input_images> --output_dir <folder_for_output> --model_path <frozen_graph_file>
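The `--gt_dir` variants above compare predictions against the ground-truth masks. As an illustration of the standard overlap metrics for binary segmentation (not necessarily the exact code used by the scripts), Dice and IoU can be computed as:

```python
import numpy as np

def dice_iou(pred, gt):
    # pred, gt: binary {0, 1} arrays of the same shape
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)
    iou = inter / (union + 1e-8)
    return dice, iou
```

The small epsilon keeps the metrics defined when both masks are empty.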
For Mask R-CNN, we use maskrcnn-benchmark.