CellSAM2 is an adaptation of Meta's SAM 2 (Segment Anything Model 2) for biological cell segmentation and tracking in microscopy images and videos. It extends SAM 2 to handle challenges specific to cell data, including cell division events, varying morphologies, and dense cell populations.
CellSAM2 is currently memory-constrained and does not work on larger images. Future work will adapt CellSAM2 to handle larger images.
- Run training Colab notebook in full
- Run inference Colab notebook in full
- Save MoMA and DynamicNuclearNet model checkpoints on Zenodo
- Train segmentation model on MoMA as proof of concept
CellSAM2 leverages the powerful foundation of SAM 2 and adapts it specifically for:
- Accurate cell instance segmentation in microscopy images
- Robust cell tracking across video frames
- Handling cell division events
- Processing Cell Tracking Challenge (CTC) formatted data
It currently supports CTC-formatted data for training, and either CTC-formatted data or TIFF files for inference.
CellSAM2 achieves state-of-the-art performance on the MoMA and DynamicNuclearNet datasets.
CellSAM2 performance metrics on the MoMA dataset
CellSAM2 performance metrics on the DynamicNuclearNet dataset
| Task | Link |
|---|---|
| 🔧 Training | Colab Notebook |
| 📈 Inference | Colab Notebook - Coming Soon |
CellSAM2 supports data in the Cell Tracking Challenge (CTC) format. For detailed data format specifications and examples, see training/README.md.
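For orientation, here is a minimal sketch of inspecting a CTC-formatted sequence with Python. The paths and the `tifffile` dependency are illustrative assumptions, not part of this repo; see training/README.md for the exact layout CellSAM2 expects:

```python
# Minimal sketch of inspecting a CTC-formatted dataset.
# Paths below are hypothetical examples of the standard CTC layout.
from pathlib import Path

import numpy as np
import tifffile  # pip install tifffile

data_dir = Path("data/moma/train")  # hypothetical dataset root
raw_frame = tifffile.imread(data_dir / "01" / "t000.tif")  # raw image, frame 0
seg_mask = tifffile.imread(data_dir / "01_GT" / "SEG" / "man_seg000.tif")  # instance mask

# In CTC masks, 0 is background and each positive integer labels one cell instance.
labels = np.unique(seg_mask)
print(f"frame shape: {raw_frame.shape}, cell instances: {labels[labels > 0].size}")
```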
CellSAM2 requires Python >=3.10, PyTorch >=2.5.1, and TorchVision >=0.20.1.
```bash
# Clone the repository
git clone https://github.com/yourusername/CellSam2.git
cd CellSam2

# Create and activate a conda environment (recommended)
conda create -n cellsam2 python=3.10
conda activate cellsam2

# Install CellSAM2
pip install -e .
```

For development, install with the dev extras:

```bash
pip install -e ".[dev]"  # or "[notebooks,dev]" if you need notebook support
```

Then set up the pre-commit hooks:

```bash
pre-commit install
```

Download the pretrained SAM2 model checkpoints:

```bash
cd checkpoints
./download_ckpts.sh
cd ..
```
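After installation, a quick sanity check that the environment meets the version requirements above (a minimal sketch):

```python
# Quick environment sanity check against the version requirements above.
import sys

import torch
import torchvision

assert sys.version_info >= (3, 10), "CellSAM2 requires Python >= 3.10"
print(f"torch {torch.__version__} (requires >= 2.5.1)")
print(f"torchvision {torchvision.__version__} (requires >= 0.20.1)")
print(f"CUDA available: {torch.cuda.is_available()}")
```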
You can train CellSAM2 for either tracking or segmentation tasks. The default mode is tracking.
```bash
# For cell tracking (default)
python train_ctc.py \
    launcher.experiment_log_dir=sam2_logs/CellSam2-tracking \
    scratch.dataset_name={dataset} \
    dataset.data_dir={dataset_path}

# For cell segmentation (single frame)
python train_ctc.py \
    launcher.experiment_log_dir=sam2_logs/CellSam2-segmentation \
    scratch.dataset_name={dataset} \
    dataset.data_dir={dataset_path} \
    scratch.num_frames=1
```

For detailed training options and configurations, see training/README.md.
The basic command structure for inference is:

```bash
python inference/track_cells.py --video_path PATH --model_name MODEL --res_path OUTPUT
```

Note: Output is currently in CTC format only.
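Because output is in CTC format, results can be inspected with a few lines of Python. A minimal sketch, assuming the standard CTC result layout (a `res_track.txt` lineage table alongside the mask TIFFs); adjust file names if this repo's output differs:

```python
# Minimal sketch: read a CTC-style res_track.txt and list division events.
# Each row is "label begin_frame end_frame parent_label"; parent 0 means no parent,
# so a division shows up as two tracks sharing the same nonzero parent.
from pathlib import Path

res_path = Path("results/tracking")  # matches --res_path in the tracking example below
tracks = [
    tuple(int(x) for x in line.split())
    for line in (res_path / "res_track.txt").read_text().splitlines()
    if line.strip()
]

for label, begin, end, parent in tracks:
    if parent != 0:
        print(f"cell {label} (frames {begin}-{end}) is a daughter of cell {parent}")
```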
For cell tracking:

```bash
python inference/track_cells.py \
    --video_path data/moma/test/CTC \
    --model_name CellSam2-model \
    --res_path results/tracking
```

For cell segmentation:

```bash
python inference/track_cells.py \
    --video_path data/moma/test/CTC \
    --model_name CellSam2-model \
    --res_path results/segmentation \
    --segment True
```
Required arguments:
- `--video_path`: Path to your CTC-formatted data
- `--res_path`: Output directory for results

Optional arguments:
- `--model_name`: Name of the model to use (default: `"CellSAM2-model"`)
- `--segment`: Enable segmentation mode (default: `False`)
- `--checkpoint_num`: Specific checkpoint number to use (default: `None`, uses latest)
- `--use_heatmap`: Whether to use heatmap prediction (default: `True`)

Advanced parameters:
- `--box_nms_thresh`: Non-maximum suppression threshold for bounding boxes (default: `0.5`)
- `--pred_iou_thresh`: IoU threshold for predictions (default: `0.7`)
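To make these two thresholds concrete: in SAM-style automatic mask generation, `pred_iou_thresh` filters out masks whose predicted quality (IoU) score falls below the cutoff, while `box_nms_thresh` suppresses duplicate detections whose boxes overlap too much. A minimal sketch of the IoU quantity both thresholds operate on (illustrative only, not this repo's implementation):

```python
# Illustration of the IoU quantity behind --pred_iou_thresh and --box_nms_thresh.
def box_iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Two heavily overlapping detections of the same cell:
print(box_iou((10, 10, 30, 30), (12, 12, 32, 32)))  # ~0.68 > 0.5, so NMS keeps only one
```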
This project builds upon SAM 2. If you use CellSAM2, please cite the original SAM 2 paper:
```bibtex
@article{ravi2024sam2,
  title={SAM 2: Segment Anything in Images and Videos},
  author={Ravi, Nikhila and Gabeur, Valentin and Hu, Yuan-Ting and Hu, Ronghang and Ryali, Chaitanya and Ma, Tengyu and Khedr, Haitham and R{\"a}dle, Roman and Rolland, Chloe and Gustafson, Laura and Mintun, Eric and Pan, Junting and Alwala, Kalyan Vasudev and Carion, Nicolas and Wu, Chao-Yuan and Girshick, Ross and Doll{\'a}r, Piotr and Feichtenhofer, Christoph},
  journal={arXiv preprint arXiv:2408.00714},
  url={https://arxiv.org/abs/2408.00714},
  year={2024}
}
```

If you find CellSAM2 useful in your research, please consider citing:
```bibtex
@software{CellSAM2,
  title={CellSAM2: Cell Segmentation and Tracking with SAM-2},
  author={Owen O'Connor},
  year={2025},
  url={https://github.com/owen24819/CellSam2},
  note={GitHub repository}
}
```
