Multitask Universal Lesion Analysis Network (MULAN)
This project contains the code of the MICCAI 2019 paper: “MULAN: Multitask Universal Lesion Analysis Network for Joint Lesion Detection, Tagging, and Segmentation” [1].
MULAN can detect a variety of lesions in CT images, predict multiple tags (body part, type, attributes) for each lesion, and segment each lesion as well. It is designed based on the Mask R-CNN framework with a 3D feature fusion strategy and three head branches (detection, tagging, and segmentation). It was trained on the DeepLesion dataset [2,3].
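To illustrate what the three head branches jointly produce for each lesion, here is a minimal sketch. The class and field names below are assumptions for illustration only, not the project's actual API:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class LesionPrediction:
    """Hypothetical per-lesion output combining MULAN's three branches."""
    box: Tuple[float, float, float, float]  # detection branch: (x1, y1, x2, y2)
    score: float                            # detection confidence in [0, 1]
    tags: List[str] = field(default_factory=list)   # tagging branch: body part, type, attributes
    mask: List[List[int]] = field(default_factory=list)  # segmentation branch: binary mask

# Example: one detected lesion with its tags (values are made up)
lesion = LesionPrediction(box=(120.0, 85.5, 160.0, 130.2), score=0.91,
                          tags=["lung", "nodule", "solid"])
print(lesion.tags)  # ['lung', 'nodule', 'solid']
```

The key point is that all three outputs are produced jointly for the same region proposal, rather than by three separate models.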
This code was adapted from Facebook's maskrcnn-benchmark. We thank them for their excellent project.
- PyTorch 1.1, torchvision 0.3.0
- Python 3.6
- If you just want to test our model, a MULAN model trained on DeepLesion is here. You can put it in checkpoints/. Note that the released model does not have the additional feature inputs (age, sex, etc.) in the refine layer, so its accuracy on DeepLesion differs from that reported in [1].
- If you want to train your own model, the DeepLesion dataset [2,3] is needed. Download it and modify the dataset path accordingly.
- To train the tag head, you also need the lesion tags and ontology (including hand_labeled_test_set.json) from here. Put the 3 files in the corresponding folder.
- Edit config.yml to set the mode (see below) and parameters. The supported modes are:
- demo: Use a trained checkpoint, input the path of a NIfTI image file in the terminal, and get prediction results (overlaid images) in an output folder.
- batch: Similar to demo, except that you input a folder containing multiple NIfTI files. The predictions are stored in a PyTorch .pth file. You can set some parameters of these two modes in config.yml.
- train: Train and validate MULAN on a dataset. Currently the code is designed for the DeepLesion dataset.
- eval: Evaluate a trained model on DeepLesion.
- vis: Visualize test results on DeepLesion.
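The contents of the batch-mode .pth file are not documented above, so the structure below is an assumption for illustration only. In practice the dict would come from torch.load(path); here it is built by hand so the sketch is self-contained:

```python
# Hypothetical structure of the batch-mode output. In the real pipeline this
# dict would be obtained with torch.load("predictions.pth"); the key names
# ("boxes", "scores", "tags") are assumptions for illustration only.
predictions = {
    "ct_001.nii.gz": {"boxes": [[30, 40, 80, 90]], "scores": [0.88],
                      "tags": [["liver", "mass"]]},
    "ct_002.nii.gz": {"boxes": [], "scores": [], "tags": []},
}

# Iterate over per-image results and report how many lesions were found
for fname, pred in predictions.items():
    print(f"{fname}: {len(pred['boxes'])} lesion(s) detected")
```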
- Because of the complexity of the universal lesion analysis (detection + tagging + segmentation) task and the limitations of the training data, the results may still not be perfect. For example, lesion detection results may contain false positives (non-lesions that look like lesions).
- MULAN was trained on lesions in DeepLesion, so it may be inaccurate on body regions and lesions that are rare in DeepLesion.
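One simple way to suppress false positives is to raise the detection score threshold, trading some recall for higher precision. A minimal sketch in plain Python (the data layout is assumed for illustration):

```python
def filter_by_score(boxes, scores, threshold=0.7):
    """Keep only detections whose confidence meets the threshold.

    A higher threshold removes more false positives but may also drop
    true lesions with low confidence.
    """
    return [(b, s) for b, s in zip(boxes, scores) if s >= threshold]

# Toy detections: three boxes with confidence scores (values made up)
boxes = [[10, 10, 50, 50], [60, 60, 90, 90], [5, 5, 20, 20]]
scores = [0.95, 0.40, 0.72]
print(filter_by_score(boxes, scores))  # keeps the 0.95 and 0.72 detections
```

Choosing the threshold is application-dependent: screening use cases may prefer a lower value to avoid missing lesions, while triage may prefer a higher one.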
1. K. Yan, Y. B. Tang, Y. Peng, V. Sandfort, M. Bagheri, Z. Lu, and R. M. Summers, “MULAN: Multitask Universal Lesion Analysis Network for Joint Lesion Detection, Tagging, and Segmentation,” in International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), 2019. (arXiv)
2. The DeepLesion dataset. (download)
3. K. Yan, X. Wang, L. Lu, and R. M. Summers, “DeepLesion: Automated Mining of Large-Scale Lesion Annotations and Universal Lesion Detection with Deep Learning,” J. Med. Imaging, 2018. (paper)
4. K. Yan, Y. Peng, V. Sandfort, M. Bagheri, Z. Lu, and R. M. Summers, “Holistic and Comprehensive Annotation of Clinically Significant Findings on Diverse CT Images: Learning from Radiology Reports and Label Ontology,” in CVPR, 2019. (arXiv)