AutoMICE


Automated MIcro-CT Imaging Contouring Engine: a one-command multi-organ segmentation pipeline for mouse micro-CT images, built on Swin UNETR and powered by MONAI.

AutoMICE is the open-source release accompanying:

Robust Automated Mouse Micro-CT Segmentation Using Swin UNEt TRansformers Lu Jiang, Di Xu, Qifan Xu, Arion Chatziioannou, Keisuke S. Iwamoto, Susanta Hui, Ke Sheng. Bioengineering (MDPI), 2024. DOI: https://doi.org/10.3390/bioengineering11121255

It segments 7 mouse organs (plus background) from a single 3D CT volume:

| Index | Organ      |
|------:|------------|
| 0     | background |
| 1     | bladder    |
| 2     | lung       |
| 3     | heart      |
| 4     | liver      |
| 5     | intestine  |
| 6     | kidney     |
| 7     | spleen     |

Everything you need lives in two places (no Docker Hub account required): this GitHub repository (code, scripts, and docs) and the AutoMICE Hugging Face model repo (pretrained weights and the Docker image tarball).




✅ Prerequisites

For the Docker workflow (recommended):

| Requirement                     | Notes                                            |
|---------------------------------|--------------------------------------------------|
| Linux / macOS / Windows + WSL2  | Ubuntu 20.04+ tested                             |
| Docker Engine ≥ 20.10           | https://docs.docker.com/engine/install/          |
| (GPU) NVIDIA driver ≥ 515       | Plus nvidia-container-toolkit                    |
| Disk                            | ~20 GB free for the image archive + loaded image |

For the Python workflow you only need Python ≥ 3.9 and a CUDA-capable GPU (or CPU for slower runs).


🚀 Quick start (Docker, recommended)

The official Docker image is published as a single automice-image.tar.gz on the AutoMICE Hugging Face model repo. The bundled installer downloads it, loads it into your local Docker daemon, and you're ready to segment.

git clone https://github.com/namijiang/AutoMICE.git
cd AutoMICE
./scripts/install_from_hf.sh           # ~3.7 GB download, one-time

# Run segmentation
docker run --gpus all --rm \
    -v /path/to/inputs:/data \
    -v /path/to/outputs:/results \
    automice:latest \
    --data /data --results /results

CPU-only systems are also supported (slower):

docker run --rm \
    -v /path/to/inputs:/data \
    -v /path/to/outputs:/results \
    automice:latest \
    --data /data --results /results --device cpu

The container exposes two well-known mount points:

| Container path | Role            |
|----------------|-----------------|
| /data          | input NIfTI dir |
| /results       | output mask dir |

For each input mouse_X.nii.gz you get back mouse_X_seg.nii.gz in /results. See docs/DOCKER.md for all options and docs/SOP.md for the full standard operating procedure.
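Once a run finishes, you may want to turn the output mask into per-organ volumes yourself. A minimal sketch (label indices from the table above; `nibabel` is assumed to be installed for the loading step, and the output path is illustrative):

```python
import numpy as np

# 8-class label table from the README (index -> organ)
ORGANS = ["background", "bladder", "lung", "heart", "liver",
          "intestine", "kidney", "spleen"]

def organ_volumes(seg, voxel_mm3):
    """Map organ name -> (voxel count, volume in mm^3) for an 8-class mask."""
    counts = np.bincount(np.asarray(seg).ravel().astype(np.int64),
                         minlength=len(ORGANS))
    return {name: (int(n), float(n) * voxel_mm3)
            for name, n in zip(ORGANS, counts)}

# Loading a real mask with nibabel (assumed installed):
#   import nibabel as nib
#   img = nib.load("/path/to/outputs/mouse_X_seg.nii.gz")
#   voxel_mm3 = float(np.prod(img.header.get_zooms()[:3]))
#   print(organ_volumes(img.get_fdata(), voxel_mm3))
```

At the default 0.2 mm isotropic spacing, each voxel is 0.008 mm³, so the voxel counts printed by the CLI convert directly to volumes.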


🐍 Don't have Docker? Use the Python CLI

git clone https://github.com/namijiang/AutoMICE.git
cd AutoMICE
pip install -r requirements.txt
pip install -e .

# Pretrained weights (~150 MB) live on Hugging Face:
pip install -U "huggingface_hub[cli]"
hf download namijiang98/AutoMICE model.pt --local-dir ./weights

automice --data ./examples/inputs --results ./examples/outputs

A typical session prints:

[automice] Device: cuda
[automice] Loading weights: ./weights/model.pt
[automice] Found 1 volume(s) in ./examples/inputs
[automice] (1/1) Processing ./examples/inputs/CT.nii.gz
[automice]   Resampled input shape (B,C,H,W,D): (1, 1, 182, 181, 567)
[automice]   Final segmentation shape: (182, 181, 567)
Voxel counts per label:
  [0] background  17734544
  [1] bladder         3724
  [2] lung           62844
  [3] heart          28507
  [4] liver         195605
  [5] intestine     599012
  [6] kidney         51790
  [7] spleen          2088
[automice]   Saved -> ./examples/outputs/CT_seg.nii.gz

⚠️ Critical: your CT must be in Hounsfield Units

The model was trained on CTs in Hounsfield Units (HU) clipped to [-1000, 5000]. If your scanner exports raw counts (e.g. 0–4095) you must apply the scanner-specific calibration first, otherwise the network sees an out-of-distribution histogram and predictions will be poor or all zeros.

Quick sanity check:

import nibabel as nib, numpy as np
v = nib.load("your_mouse.nii.gz").get_fdata()
print(v.min(), np.percentile(v, [1, 50, 99]), v.max())
# Air should sit near -1000, soft tissue near 0.
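If the check above shows raw counts rather than HU, a linear rescale is the usual fix. The slope and intercept are scanner-specific calibration values (e.g. derived from an air/water phantom); the numbers below are purely illustrative, not a universal conversion:

```python
import numpy as np

def counts_to_hu(raw, slope, intercept):
    """Linear rescale of raw scanner counts to Hounsfield Units.

    slope/intercept must come from your scanner's calibration;
    the values used below are illustrative only.
    """
    return raw.astype(np.float32) * slope + intercept

# Example: a 12-bit export where 0 counts corresponds to air (-1000 HU)
hu = counts_to_hu(np.array([0, 1000, 4095]), slope=1.0, intercept=-1000.0)
```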

See docs/DATA_PREPARATION.md for full input format notes (DICOM β†’ NIfTI conversion, multi-mouse cages, geometry).


📦 What is shipped

AutoMICE/
├── automice/                    # Python package (CLI + library)
│   ├── inference.py             # main entry point (--data / --results)
│   ├── model.py                 # Swin UNETR factory + weight loader
│   ├── preprocess.py            # spacing + intensity transforms
│   ├── dicom_to_nifti.py        # optional DICOM -> NIfTI helper
│   ├── utils.py                 # file scanning, resampling, reporting
│   └── labels.py                # 8-class label table
├── scripts/
│   ├── install_from_hf.sh       # END-USER installer (download tarball + docker load)
│   ├── run_docker.sh            # convenience wrapper for ad-hoc runs
│   ├── build_docker.sh          # MAINTAINER: build the image, bake the checkpoint
│   ├── save_docker_image.sh     # MAINTAINER: docker save -> tar.gz
│   ├── upload_release_to_hf.sh  # MAINTAINER: upload weights + tarball to HF
│   └── docker_entrypoint.sh     # in-container entry script
├── docs/
│   ├── SOP.md                   # standard operating procedure (clinical-grade)
│   ├── DOCKER.md                # all Docker options + troubleshooting
│   └── DATA_PREPARATION.md      # input format & DICOM conversion guide
├── huggingface_demo/            # Gradio app for HF Spaces (`namijiang98/AutoMICE`)
├── examples/                    # how to organise your data (no CT included)
├── Dockerfile                   # CUDA 11.7 + PyTorch 2.0 + MONAI 1.2
├── requirements.txt             # Python deps
├── setup.py                     # `pip install -e .` -> `automice` CLI
├── LICENSE                      # Apache 2.0
└── CITATION.cff                 # machine-readable citation metadata

🧪 Inference parameters (paper, "test4")

These values are hard-coded as defaults in the CLI and Dockerfile and reproduce the numbers reported in the paper:

| Parameter           | Value                     |
|---------------------|---------------------------|
| feature_size        | 36                        |
| roi (x, y, z)       | 128 × 128 × 128           |
| spacing (mm)        | 0.2 × 0.2 × 0.2           |
| Intensity window    | [-1000, 5000] HU → [0, 1] |
| infer_overlap       | 0.8                       |
| Sliding-window mode | Gaussian blending         |

To override, pass them on the command line, e.g. --infer_overlap 0.5 --roi_x 96 --roi_y 96 --roi_z 96.
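The intensity window in the table is a simple clip-and-rescale. A plain-NumPy sketch of that transform (the pipeline itself applies it via MONAI preprocessing, so this is for understanding, not a drop-in replacement):

```python
import numpy as np

def window_to_unit(hu, lo=-1000.0, hi=5000.0):
    """Clip HU to [lo, hi] and rescale linearly to [0, 1].

    Mirrors the [-1000, 5000] HU -> [0, 1] window from the parameter
    table above (NumPy sketch of the MONAI intensity transform).
    """
    hu = np.clip(hu.astype(np.float32), lo, hi)
    return (hu - lo) / (hi - lo)

x = window_to_unit(np.array([-2000.0, -1000.0, 0.0, 5000.0]))
# air (-1000 HU) maps to 0.0 and the window ceiling (5000 HU) to 1.0
```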


🌐 Online demo (Hugging Face Spaces)

A lightweight Gradio demo is available at:

https://huggingface.co/spaces/namijiang98/AutoMICE

Upload a .nii.gz CT and visualise the resulting 8-class segmentation as 2D slice montages. Because free HF Spaces are CPU-only, the demo downsamples volumes by default; for full-resolution, publication-quality runs use the Docker image or the local Python CLI.

The Gradio source code lives in huggingface_demo/.


📑 Data preparation

AutoMICE expects NIfTI volumes (.nii or .nii.gz). If your scanner exports DICOM series, convert them first using the bundled utility:

python -m automice.dicom_to_nifti \
    --input  /path/to/dicom_root \
    --output /path/to/nifti_folder

Full details and recommended folder layouts are in docs/DATA_PREPARATION.md.


💬 Bug reports / questions

Please open an issue at https://github.com/namijiang/AutoMICE/issues. Include:

  • the command you ran;
  • the stdout/stderr of the container (docker run ... 2>&1 | tee log.txt);
  • the shape and value range of one of your input volumes (the snippet in the HU section);
  • the AutoMICE version (docker run --rm automice:latest --help shows it in the prog string).

📚 Citation

If AutoMICE helps your research, please cite our paper and this repository:

@article{jiang2024automice,
  title   = {Robust Automated Mouse Micro-CT Segmentation Using Swin UNEt TRansformers},
  author  = {Jiang, Lu and Xu, Di and Xu, Qifan and Chatziioannou, Arion
             and Iwamoto, Keisuke S. and Hui, Susanta and Sheng, Ke},
  journal = {Bioengineering},
  year    = {2024},
  doi     = {10.3390/bioengineering11121255},
  url     = {https://doi.org/10.3390/bioengineering11121255}
}

Please also cite the SwinUNETR papers:

@inproceedings{tang2022self,
  title     = {Self-supervised pre-training of swin transformers for 3d medical image analysis},
  author    = {Tang, Yucheng and Yang, Dong and Li, Wenqi and Roth, Holger R and
               Landman, Bennett and Xu, Daguang and Nath, Vishwesh and Hatamizadeh, Ali},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages     = {20730--20740},
  year      = {2022}
}

@article{hatamizadeh2022swin,
  title   = {Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images},
  author  = {Hatamizadeh, Ali and Nath, Vishwesh and Tang, Yucheng and Yang, Dong and
             Roth, Holger and Xu, Daguang},
  journal = {arXiv preprint arXiv:2201.01266},
  year    = {2022}
}

A machine-readable citation file (CITATION.cff) is also included so tools like GitHub's "Cite this repository" button work out of the box.


🔒 License

AutoMICE source code and pretrained weights are released under the Apache License 2.0.


πŸ™ Acknowledgements

Built on top of MONAI and the research-contributions/SwinUNETR reference implementation by NVIDIA / Project MONAI.
