Automated MIcro-CT Imaging Contouring Engine: a one-command multi-organ segmentation pipeline for mouse micro-CT images, built on Swin UNETR and powered by MONAI.
AutoMICE is the open-source release accompanying:
*Robust Automated Mouse Micro-CT Segmentation Using Swin UNEt TRansformers.* Lu Jiang, Di Xu, Qifan Xu, Arion Chatziioannou, Keisuke S. Iwamoto, Susanta Hui, Ke Sheng. Bioengineering (MDPI), 2024. DOI: https://doi.org/10.3390/bioengineering11121255
It segments 7 mouse organs (plus background) from a single 3D CT volume:
| Index | Organ |
|---|---|
| 0 | background |
| 1 | bladder |
| 2 | lung |
| 3 | heart |
| 4 | liver |
| 5 | intestine |
| 6 | kidney |
| 7 | spleen |
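For downstream scripting, the table above can be captured as a small Python mapping. This is an illustrative sketch only; the shipped `automice/labels.py` module is the authoritative source of the label table:

```python
# Index -> organ mapping from the table above (illustrative sketch;
# the package's automice/labels.py defines the authoritative table).
AUTOMICE_LABELS = {
    0: "background",
    1: "bladder",
    2: "lung",
    3: "heart",
    4: "liver",
    5: "intestine",
    6: "kidney",
    7: "spleen",
}

def organ_name(index: int) -> str:
    """Return the organ name for a voxel label in a *_seg.nii.gz mask."""
    if index not in AUTOMICE_LABELS:
        raise ValueError(f"unknown AutoMICE label: {index}")
    return AUTOMICE_LABELS[index]
```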
Everything you need lives in two places (no Docker Hub account required):

- Source code (this repo): github.com/namijiang/AutoMICE
- Pretrained weights + ready-to-run Docker image: huggingface.co/namijiang98/AutoMICE
- Try it in the browser (downsampled CPU demo): huggingface.co/spaces/namijiang98/AutoMICE
- Prerequisites
- Quick start (Docker)
- Don't have Docker? Use the Python CLI
- Critical: input intensity scale
- What is shipped
- Inference parameters (paper, "test4")
- Online demo
- Data preparation
- Citation
- License
For the Docker workflow (recommended):
| Requirement | Notes |
|---|---|
| Linux / macOS / Windows+WSL2 | Ubuntu 20.04+ tested |
| Docker Engine ≥ 20.10 | https://docs.docker.com/engine/install/ |
| (GPU) NVIDIA driver ≥ 515 | Plus nvidia-container-toolkit |
| Disk | ~20 GB free for the image archive + loaded image |
For the Python workflow you only need Python ≥ 3.9 and a CUDA-capable GPU (or CPU for slower runs).
The official Docker image is published as a single `automice-image.tar.gz`
on the AutoMICE Hugging Face model repo. The bundled installer downloads it,
loads it into your local Docker daemon, and you are ready to segment.
```bash
git clone https://github.com/namijiang/AutoMICE.git
cd AutoMICE
./scripts/install_from_hf.sh   # ~3.7 GB download, one-time

# Run segmentation
docker run --gpus all --rm \
  -v /path/to/inputs:/data \
  -v /path/to/outputs:/results \
  automice:latest \
  --data /data --results /results
```

CPU-only systems are also supported (slower):

```bash
docker run --rm \
  -v /path/to/inputs:/data \
  -v /path/to/outputs:/results \
  automice:latest \
  --data /data --results /results --device cpu
```

The container exposes two well-known mount points:
| Container path | Role |
|---|---|
| `/data` | input NIfTI directory |
| `/results` | output mask directory |
For each input `mouse_X.nii.gz` you get back `mouse_X_seg.nii.gz` in
`/results`. See `docs/DOCKER.md` for all options and
`docs/SOP.md` for the full standard operating procedure.
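After a batch run it is worth checking that every input volume actually produced a mask. A minimal sketch of such a check — the `missing_masks` helper is hypothetical, not part of the package:

```python
from pathlib import Path

def missing_masks(inputs_dir: str, results_dir: str) -> list:
    """Return names of input volumes with no matching *_seg.nii.gz mask yet.

    Follows the naming convention described above:
    mouse_X.nii.gz -> mouse_X_seg.nii.gz
    """
    missing = []
    for vol in sorted(Path(inputs_dir).glob("*.nii.gz")):
        stem = vol.name[: -len(".nii.gz")]  # strip the double extension
        if not (Path(results_dir) / (stem + "_seg.nii.gz")).exists():
            missing.append(vol.name)
    return missing
```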
```bash
git clone https://github.com/namijiang/AutoMICE.git
cd AutoMICE
pip install -r requirements.txt
pip install -e .

# Pretrained weights (~150 MB) live on Hugging Face:
pip install -U "huggingface_hub[cli]"
hf download namijiang98/AutoMICE model.pt --local-dir ./weights

automice --data ./examples/inputs --results ./examples/outputs
```

A typical session prints:
```text
[automice] Device: cuda
[automice] Loading weights: ./weights/model.pt
[automice] Found 1 volume(s) in ./examples/inputs
[automice] (1/1) Processing ./examples/inputs/CT.nii.gz
[automice] Resampled input shape (B,C,H,W,D): (1, 1, 182, 181, 567)
[automice] Final segmentation shape: (182, 181, 567)
Voxel counts per label:
  [0] background  17734544
  [1] bladder         3724
  [2] lung           62844
  [3] heart          28507
  [4] liver         195605
  [5] intestine     599012
  [6] kidney         51790
  [7] spleen          2088
[automice] Saved -> ./examples/outputs/CT_seg.nii.gz
```
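The per-label voxel report can be reproduced from any saved mask in a few lines of NumPy. A sketch of the counting step (NumPy and nibabel are already dependencies of the pipeline; `voxel_counts` itself is a hypothetical helper, not a packaged function):

```python
import numpy as np

def voxel_counts(mask, n_classes: int = 8) -> dict:
    """Count voxels per label, like the CLI report above.

    `mask` is an integer-valued segmentation array, e.g. loaded with
    nibabel: nib.load("CT_seg.nii.gz").get_fdata().
    """
    counts = np.bincount(np.asarray(mask).astype(np.int64).ravel(),
                         minlength=n_classes)
    return {label: int(n) for label, n in enumerate(counts)}
```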
The model was trained on CTs in Hounsfield Units (HU) clipped to
[-1000, 5000]. If your scanner exports raw counts (e.g. 0–4095) you
must apply the scanner-specific calibration first, otherwise the
network sees an out-of-distribution histogram and predictions will be poor
or all zeros.
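For most scanners the calibration is a linear rescale (the DICOM RescaleSlope/RescaleIntercept pair, when available). A sketch of that step — the slope and intercept below are placeholders, not universal values; substitute your scanner's calibration:

```python
import numpy as np

def counts_to_hu(raw, slope=1.0, intercept=-1000.0):
    """Linear calibration to HU, then clip to the [-1000, 5000] window
    the model was trained on.

    slope/intercept are scanner-specific PLACEHOLDERS -- look them up in
    your DICOM headers or vendor documentation before using this.
    """
    hu = np.asarray(raw, dtype=np.float32) * slope + intercept
    return np.clip(hu, -1000.0, 5000.0)
```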
Quick sanity check:

```python
import nibabel as nib, numpy as np

v = nib.load("your_mouse.nii.gz").get_fdata()
print(v.min(), np.percentile(v, [1, 50, 99]), v.max())
# Air should sit near -1000, soft tissue near 0.
```

See `docs/DATA_PREPARATION.md` for full input
format notes (DICOM → NIfTI conversion, multi-mouse cages, geometry).
```text
AutoMICE/
├── automice/                   # Python package (CLI + library)
│   ├── inference.py            # main entry point (--data / --results)
│   ├── model.py                # Swin UNETR factory + weight loader
│   ├── preprocess.py           # spacing + intensity transforms
│   ├── dicom_to_nifti.py       # optional DICOM -> NIfTI helper
│   ├── utils.py                # file scanning, resampling, reporting
│   └── labels.py               # 8-class label table
├── scripts/
│   ├── install_from_hf.sh      # END-USER installer (download tarball + docker load)
│   ├── run_docker.sh           # convenience wrapper for ad-hoc runs
│   ├── build_docker.sh         # MAINTAINER: build the image, bake the checkpoint
│   ├── save_docker_image.sh    # MAINTAINER: docker save -> tar.gz
│   ├── upload_release_to_hf.sh # MAINTAINER: upload weights + tarball to HF
│   └── docker_entrypoint.sh    # in-container entry script
├── docs/
│   ├── SOP.md                  # standard operating procedure (clinical-grade)
│   ├── DOCKER.md               # all Docker options + troubleshooting
│   └── DATA_PREPARATION.md     # input format & DICOM conversion guide
├── huggingface_demo/           # Gradio app for HF Spaces (`namijiang98/AutoMICE`)
├── examples/                   # how to organise your data (no CT included)
├── Dockerfile                  # CUDA 11.7 + PyTorch 2.0 + MONAI 1.2
├── requirements.txt            # Python deps
├── setup.py                    # `pip install -e .` -> `automice` CLI
├── LICENSE                     # Apache 2.0
└── CITATION.cff                # machine-readable citation metadata
```
These values are hard-coded as defaults in the CLI and Dockerfile and reproduce the numbers reported in the paper:

| Parameter | Value |
|---|---|
| `feature_size` | 36 |
| `roi` (x, y, z) | 128 × 128 × 128 |
| `spacing` (mm) | 0.2 × 0.2 × 0.2 |
| Intensity window | [-1000, 5000] HU → [0, 1] |
| `infer_overlap` | 0.8 |
| Sliding-window mode | Gaussian blending |

To override, pass them on the command line, e.g.
`--infer_overlap 0.5 --roi_x 96 --roi_y 96 --roi_z 96`.
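To see what an `infer_overlap` of 0.8 costs, note that the window stride along each axis is roughly `roi * (1 - overlap)`. A back-of-the-envelope sketch of the tiling (the exact rounding used by MONAI's sliding-window inferer may differ; this only illustrates the scaling):

```python
def window_starts(dim: int, roi: int = 128, overlap: float = 0.8) -> list:
    """Approximate start indices of sliding-window tiles along one axis.

    stride = roi * (1 - overlap); with the defaults, 128 * 0.2 -> 25 voxels,
    so a long axis is covered by many heavily overlapping windows.
    """
    stride = max(1, int(roi * (1.0 - overlap)))
    if dim <= roi:
        return [0]
    starts = list(range(0, dim - roi, stride))
    starts.append(dim - roi)  # final window flush with the volume edge
    return starts
```

This is why lowering `--infer_overlap` (e.g. to 0.5) speeds up CPU runs substantially, at some cost in boundary smoothness.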
A lightweight Gradio demo is available at huggingface.co/spaces/namijiang98/AutoMICE.
Upload a `.nii.gz` CT and visualise the resulting 8-class segmentation as 2D
slice montages. Because free Hugging Face Spaces run on CPU only, the demo
downsamples volumes by default; for full-resolution, publication-quality runs
use the Docker image or the local Python CLI.
The Gradio source code lives in `huggingface_demo/`.
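The downsampling the demo applies can be as simple as strided slicing. A purely illustrative sketch (see `huggingface_demo/` for what the Space actually does):

```python
import numpy as np

def downsample(vol, factor: int = 2):
    """Naive strided downsampling along all three axes.

    Illustrative only -- it halves resolution without anti-aliasing, which
    is acceptable for a quick CPU preview but not for quantitative use.
    """
    return np.asarray(vol)[::factor, ::factor, ::factor]
```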
AutoMICE expects NIfTI volumes (`.nii` or `.nii.gz`). If your scanner
exports DICOM series, convert them first using the bundled utility:

```bash
python -m automice.dicom_to_nifti \
    --input /path/to/dicom_root \
    --output /path/to/nifti_folder
```

Full details and recommended folder layouts are in
`docs/DATA_PREPARATION.md`.
Please open an issue at https://github.com/namijiang/AutoMICE/issues. Include:

- the command you ran;
- the stdout/stderr of the container (`docker run ... 2>&1 | tee log.txt`);
- the shape and value range of one of your input volumes (the snippet in the HU section);
- the AutoMICE version (`docker run --rm automice:latest --help` shows it in the prog string).
If AutoMICE helps your research, please cite our paper and this repository:
```bibtex
@article{jiang2024automice,
  title   = {Robust Automated Mouse Micro-CT Segmentation Using Swin UNEt TRansformers},
  author  = {Jiang, Lu and Xu, Di and Xu, Qifan and Chatziioannou, Arion
             and Iwamoto, Keisuke S. and Hui, Susanta and Sheng, Ke},
  journal = {Bioengineering},
  year    = {2024},
  doi     = {10.3390/bioengineering11121255},
  url     = {https://doi.org/10.3390/bioengineering11121255}
}
```

Please also cite the SwinUNETR papers:
```bibtex
@inproceedings{tang2022self,
  title     = {Self-supervised pre-training of swin transformers for 3d medical image analysis},
  author    = {Tang, Yucheng and Yang, Dong and Li, Wenqi and Roth, Holger R and
               Landman, Bennett and Xu, Daguang and Nath, Vishwesh and Hatamizadeh, Ali},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages     = {20730--20740},
  year      = {2022}
}

@article{hatamizadeh2022swin,
  title   = {Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images},
  author  = {Hatamizadeh, Ali and Nath, Vishwesh and Tang, Yucheng and Yang, Dong and
             Roth, Holger and Xu, Daguang},
  journal = {arXiv preprint arXiv:2201.01266},
  year    = {2022}
}
```

A machine-readable citation file (`CITATION.cff`) is also
included so tools like GitHub's "Cite this repository" button work out of
the box.
AutoMICE source code and pretrained weights are released under the Apache License 2.0.
Built on top of MONAI and the research-contributions/SwinUNETR reference implementation by NVIDIA / Project MONAI.