Yankai Jiang, Zhongzhen Huang, Rongzhao Zhang, Xiaofan Zhang, Shaoting Zhang
- [2024/03] ZePT is accepted to CVPR 2024!
- [2024/12] The codes and model weights of ZePT are released!
It is recommended to build a Python 3.9 virtual environment using conda:

```shell
git clone https://github.com/Yankai96/ZePT.git
cd ZePT
conda env create -f env.yml
```
- 01 Multi-Atlas Labeling Beyond the Cranial Vault - Workshop and Challenge (BTCV)
- 02 Pancreas-CT TCIA
- 03 Combined Healthy Abdominal Organ Segmentation (CHAOS)
- 04 Liver Tumor Segmentation Challenge (LiTS)
- 05 Kidney and Kidney Tumor Segmentation (KiTS)
- 06 Liver segmentation (3D-IRCADb)
- 07 WORD: A large scale dataset, benchmark and clinical applicable study for abdominal organ segmentation from CT image
- 08 AbdomenCT-1K
- 09 Multi-Modality Abdominal Multi-Organ Segmentation Challenge (AMOS)
- 10 Medical Segmentation Decathlon (Liver, Lung, Pancreas, HepaticVessel, Spleen, Colon)
- 11 CT volumes with multiple organ segmentations (CT-ORG)
- 12 AbdomenCT 12organ
- Please refer to the CLIP-Driven Universal Model repository for how to organize the downloaded datasets.
- Modify `ORGAN_DATASET_DIR` and `NUM_WORKER` in label_transfer.py, then run:

```shell
python -W ignore label_transfer.py
```
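For reference, the two variables can be set along these lines (the path and worker count below are placeholders for illustration, not the repo's defaults):

```python
# Hypothetical settings inside label_transfer.py -- adjust to your setup.
ORGAN_DATASET_DIR = '/data/ct_datasets/'  # root folder containing the datasets listed above
NUM_WORKER = 8                            # number of parallel preprocessing workers
```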
We provide the text prompts used for Query-Knowledge Alignment. These texts contain detailed knowledge of each [class name].
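As an illustration only (the actual prompt files ship with the repo and may differ in wording and structure), knowledge-rich prompts of this kind can be organized as a mapping from class names to descriptive text:

```python
# Illustrative sketch: knowledge prompts keyed by class name.
# The descriptions and the fallback template here are hypothetical,
# not the released ZePT prompts.
KNOWLEDGE_PROMPTS = {
    "liver tumor": (
        "A liver tumor appears as a region of abnormal attenuation within "
        "the liver parenchyma on CT, often with irregular margins."
    ),
    "kidney tumor": (
        "A kidney tumor is a mass arising from the renal cortex or medulla, "
        "typically distorting the normal kidney contour."
    ),
}

def get_prompt(class_name: str) -> str:
    """Return the knowledge text for a class, or a generic fallback prompt."""
    return KNOWLEDGE_PROMPTS.get(
        class_name, f"A computerized tomography image of a {class_name}."
    )
```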
The weights used for zero-shot inference are provided on Google Drive.
- Evaluation

```shell
bash scripts/test.sh
```
If you find ZePT useful, please cite it with this BibTeX entry:

```bibtex
@inproceedings{jiang2024zept,
  title={ZePT: Zero-shot pan-tumor segmentation via query-disentangling and self-prompting},
  author={Jiang, Yankai and Huang, Zhongzhen and Zhang, Rongzhao and Zhang, Xiaofan and Zhang, Shaoting},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={11386--11397},
  year={2024}
}
```

The CLIP-Driven-Universal-Model served as the foundational codebase for our work and provided us with significant inspiration!