diff --git a/README.md b/README.md
index 23729c3f..6ba3decc 100644
--- a/README.md
+++ b/README.md
@@ -1,5 +1,5 @@
# CellSeg3D: self-supervised (and supervised) 3D cell segmentation, primarily for mesoSPIM data!
-[](https://www.napari-hub.org/plugins/napari_cellseg3d)
+[](https://www.napari-hub.org/plugins/napari_cellseg3d)
[](https://pypi.org/project/napari-cellseg3d)
[](https://pepy.tech/project/napari-cellseg3d)
[](https://pepy.tech/project/napari-cellseg3d)
@@ -22,9 +22,7 @@
## Documentation
-📚 Documentation is available at [https://AdaptiveMotorControlLab.github.io/CellSeg3D
-](https://adaptivemotorcontrollab.github.io/CellSeg3D/welcome.html)
-
+📚 Documentation is available at [https://AdaptiveMotorControlLab.github.io/CellSeg3D](https://adaptivemotorcontrollab.github.io/CellSeg3D/welcome.html)
📚 For additional examples and how to reproduce our paper figures, see: [https://github.com/C-Achard/cellseg3d-figures](https://github.com/C-Achard/cellseg3d-figures)
@@ -38,7 +36,7 @@ To use the plugin, please run:
```
napari
```
-Then go into `Plugins > napari_cellseg3d`, and choose which tool to use.
+Then go into `Plugins > napari_cellseg3d`, and choose which tool to use.
- **Review (label)**: This module allows you to review your labels, from predictions or manual labeling, and correct them if needed. It then saves the status of each file in a csv, for easier monitoring.
- **Inference**: This module allows you to use pre-trained segmentation algorithms on volumes to automatically label cells and compute statistics.
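For readers who prefer scripting to the Plugins menu, here is a minimal sketch of opening the plugin from Python. It only relies on napari's public `add_plugin_dock_widget` API; whether the plugin name alone is enough (or a specific widget name must also be passed) depends on the installed version, so treat the call as illustrative.

```python
# Minimal sketch, assuming napari and napari_cellseg3d are installed.
# Equivalent to launching `napari` and opening Plugins > napari_cellseg3d.
import napari

viewer = napari.Viewer()
# Dock the plugin's widget; passing only the plugin name lets napari pick a
# default widget. If the plugin exposes several widgets, a widget name may
# need to be supplied as the second argument.
viewer.window.add_plugin_dock_widget("napari_cellseg3d")
napari.run()  # start the Qt event loop when running as a script
```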
@@ -64,7 +62,11 @@ F1-score is computed from the Intersection over Union (IoU) with ground truth la
## News
-**New version: v0.2.2**
+### **CellSeg3D now published in eLife**
+
+Read the [article here!](https://elifesciences.org/articles/99848)
+
+### **New version: v0.2.2**
- v0.2.2:
- Updated the Colab Notebooks for training and inference
@@ -96,14 +98,13 @@ Previous additions:
- Many small improvements and many bug fixes
-
-
## Requirements
**Compatible with Python 3.8 to 3.10.**
Requires **[napari]**, **[PyTorch]** and **[MONAI]**.
Compatible with Windows, MacOS and Linux.
-Installation should not take more than 30 minutes, depending on your internet connection.
+Installation of the plugin itself should not take more than 30 minutes, depending on your internet connection,
+and whether you already have Python and a package manager installed.
For PyTorch, please see [the PyTorch website for installation instructions].
@@ -111,6 +112,8 @@ A CUDA-capable GPU is not needed but very strongly recommended, especially for t
If you get errors from MONAI regarding missing readers, please see [MONAI's optional dependencies] page for instructions on getting the readers required by your images.
+Please reach out if you have any issues with the installation; we will be happy to help!
+
### Install note for ARM64 (Silicon) Mac users
To avoid issues when installing on the ARM64 architecture, please follow these steps.
@@ -187,18 +190,27 @@ Distributed under the terms of the [MIT] license.
## Citation
```
-@article {Achard2024,
- author = {Achard, Cyril and Kousi, Timokleia and Frey, Markus and Vidal, Maxime and Paychere, Yves and Hofmann, Colin and Iqbal, Asim and Hausmann, Sebastien B. and Pages, Stephane and Mathis, Mackenzie W.},
- title = {CellSeg3D: self-supervised 3D cell segmentation for microscopy},
- elocation-id = {2024.05.17.594691},
- year = {2024},
- doi = {10.1101/2024.05.17.594691},
- publisher = {Cold Spring Harbor Laboratory},
- URL = {https://www.biorxiv.org/content/early/2024/05/17/2024.05.17.594691},
- eprint = {https://www.biorxiv.org/content/early/2024/05/17/2024.05.17.594691.full.pdf},
- journal = {bioRxiv}
+@article {10.7554/eLife.99848,
+article_type = {journal},
+title = {CellSeg3D: Self-supervised 3D cell segmentation for fluorescence microscopy},
+author = {Achard, Cyril and Kousi, Timokleia and Frey, Markus and Vidal, Maxime and Paychere, Yves and Hofmann, Colin and Iqbal, Asim and Hausmann, Sebastien B and Pagès, Stéphane and Mathis, Mackenzie Weygandt},
+editor = {Cardona, Albert},
+volume = 13,
+year = 2025,
+month = {jun},
+pub_date = {2025-06-24},
+pages = {RP99848},
+citation = {eLife 2025;13:RP99848},
+doi = {10.7554/eLife.99848},
+url = {https://doi.org/10.7554/eLife.99848},
+abstract = {Understanding the complex three-dimensional structure of cells is crucial across many disciplines in biology and especially in neuroscience. Here, we introduce a set of models including a 3D transformer (SwinUNetR) and a novel 3D self-supervised learning method (WNet3D) designed to address the inherent complexity of generating 3D ground truth data and quantifying nuclei in 3D volumes. We developed a Python package called CellSeg3D that provides access to these models in Jupyter Notebooks and in a napari GUI plugin. Recognizing the scarcity of high-quality 3D ground truth data, we created a fully human-annotated mesoSPIM dataset to advance evaluation and benchmarking in the field. To assess model performance, we benchmarked our approach across four diverse datasets: the newly developed mesoSPIM dataset, a 3D platynereis-ISH-Nuclei confocal dataset, a separate 3D Platynereis-Nuclei light-sheet dataset, and a challenging and densely packed Mouse-Skull-Nuclei confocal dataset. We demonstrate that our self-supervised model, WNet3D – trained without any ground truth labels – achieves performance on par with state-of-the-art supervised methods, paving the way for broader applications in label-scarce biological contexts.},
+keywords = {self-supervised learning, artificial intelligence, neuroscience, mesoSPIM, confocal microscopy, platynereis},
+journal = {eLife},
+issn = {2050-084X},
+publisher = {eLife Sciences Publications, Ltd},
}
```
+
## Acknowledgements
This plugin was originally developed by Cyril Achard, Maxime Vidal, and Mackenzie Mathis.
diff --git a/conda/napari_CellSeg3D_ARM64.yml b/conda/napari_CellSeg3D_ARM64.yml
index 49de0f12..de850061 100644
--- a/conda/napari_CellSeg3D_ARM64.yml
+++ b/conda/napari_CellSeg3D_ARM64.yml
@@ -18,7 +18,7 @@ dependencies:
- monai[nibabel, einops]>=0.9.0
- tqdm
- scikit-image
- - pyclesperanto-prototype
+ - pyclesperanto
- tqdm
- matplotlib
- napari_cellseg3d
diff --git a/docs/welcome.rst b/docs/welcome.rst
index 4f75c17b..652d880b 100644
--- a/docs/welcome.rst
+++ b/docs/welcome.rst
@@ -168,7 +168,7 @@ This plugin additionally uses the following libraries and software:
.. _PyTorch: https://pytorch.org/
.. _MONAI project: https://monai.io/
.. _on their website: https://docs.monai.io/en/stable/networks.html#nets
-.. _pyclEsperanto: https://github.com/clEsperanto/pyclesperanto_prototype
+.. _pyclEsperanto: https://github.com/clEsperanto/pyclesperanto
.. _WNet: https://arxiv.org/abs/1711.08506
.. rubric:: References
diff --git a/napari_cellseg3d/code_models/instance_segmentation.py b/napari_cellseg3d/code_models/instance_segmentation.py
index dbc903d3..754a996f 100644
--- a/napari_cellseg3d/code_models/instance_segmentation.py
+++ b/napari_cellseg3d/code_models/instance_segmentation.py
@@ -6,7 +6,7 @@
from typing import List
import numpy as np
-import pyclesperanto_prototype as cle
+import pyclesperanto as cle
from qtpy.QtWidgets import QWidget
from skimage.measure import label, regionprops
from skimage.morphology import remove_small_objects
@@ -287,7 +287,7 @@ def voronoi_otsu(
BASED ON CODE FROM : napari_pyclesperanto_assistant by Robert Haase
https://github.com/clEsperanto/napari_pyclesperanto_assistant
Original code at :
- https://github.com/clEsperanto/pyclesperanto_prototype/blob/master/pyclesperanto_prototype/_tier9/_voronoi_otsu_labeling.py.
+ https://github.com/clEsperanto/pyclesperanto/blob/d1990e28b1da44a7921890b7bd809d522d3198b8/pyclesperanto/_tier7.py#L409-L448.
Args:
volume (np.ndarray): volume to segment
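Since this hunk switches the import from `pyclesperanto_prototype` to `pyclesperanto`, a minimal sketch of calling Voronoi-Otsu labeling through the new package is shown below. The sigma values and the random placeholder volume are illustrative assumptions, not the plugin's defaults.

```python
# Minimal sketch, assuming pyclesperanto is installed and a compute device
# (OpenCL/CUDA) is available. Sigma values are illustrative only.
import numpy as np
import pyclesperanto as cle

cle.select_device()  # pick the default compute device

volume = np.random.random((64, 64, 64)).astype(np.float32)  # placeholder volume
labels = cle.voronoi_otsu_labeling(volume, spot_sigma=2.0, outline_sigma=2.0)
labels_np = cle.pull(labels)  # copy the label image back to a NumPy array
print(int(labels_np.max()), "objects found")
```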
diff --git a/napari_cellseg3d/dev_scripts/sliding_window_voronoi.py b/napari_cellseg3d/dev_scripts/sliding_window_voronoi.py
index e644cd8d..2585d900 100644
--- a/napari_cellseg3d/dev_scripts/sliding_window_voronoi.py
+++ b/napari_cellseg3d/dev_scripts/sliding_window_voronoi.py
@@ -1,6 +1,6 @@
"""Test script for sliding window Voronoi-Otsu segmentation.""."""
import numpy as np
-import pyclesperanto_prototype as cle
+import pyclesperanto as cle
from tqdm import tqdm
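The dev script above applies Voronoi-Otsu labeling in a sliding-window fashion. Below is a rough sketch of that general idea (chunk the volume, label each block, offset the label IDs so they stay unique); it is an assumption about the approach, not the script's actual implementation, which may use overlapping windows and different stitching.

```python
# Rough sketch of block-wise Voronoi-Otsu labeling with pyclesperanto.
# The block size and sigmas are arbitrary illustrative values.
import numpy as np
import pyclesperanto as cle
from tqdm import tqdm

def blockwise_voronoi_otsu(volume: np.ndarray, block: int = 64) -> np.ndarray:
    labels = np.zeros(volume.shape, dtype=np.uint32)
    offset = 0
    starts = [
        (z, y, x)
        for z in range(0, volume.shape[0], block)
        for y in range(0, volume.shape[1], block)
        for x in range(0, volume.shape[2], block)
    ]
    for z, y, x in tqdm(starts):
        crop = volume[z : z + block, y : y + block, x : x + block]
        block_labels = cle.pull(
            cle.voronoi_otsu_labeling(crop, spot_sigma=2.0, outline_sigma=2.0)
        ).astype(np.uint32)
        block_labels[block_labels > 0] += offset  # keep IDs unique across blocks
        offset = max(offset, int(block_labels.max()))
        labels[z : z + block, y : y + block, x : x + block] = block_labels
    return labels
```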
diff --git a/notebooks/Colab_inference_demo.ipynb b/notebooks/Colab_inference_demo.ipynb
index e5d3888e..ff673d4f 100644
--- a/notebooks/Colab_inference_demo.ipynb
+++ b/notebooks/Colab_inference_demo.ipynb
@@ -3,8 +3,8 @@
{
"cell_type": "markdown",
"metadata": {
- "id": "view-in-github",
- "colab_type": "text"
+ "colab_type": "text",
+ "id": "view-in-github"
},
"source": [
"
"
@@ -48,8 +48,8 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
- "id": "bnFKu6uFAm-z",
- "collapsed": true
+ "collapsed": true,
+ "id": "bnFKu6uFAm-z"
},
"outputs": [],
"source": [
@@ -151,28 +151,28 @@
},
{
"cell_type": "code",
- "execution_count": 4,
+ "execution_count": null,
"metadata": {
- "id": "O0jLRpARAm-0",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 35
},
+ "id": "O0jLRpARAm-0",
"outputId": "e4e8549c-7100-4c0c-bc30-505c0dfeb138"
},
"outputs": [
{
- "output_type": "execute_result",
"data": {
- "text/plain": [
- "'cupy backend (experimental)'"
- ],
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
- }
+ },
+ "text/plain": [
+ "'cupy backend (experimental)'"
+ ]
},
+ "execution_count": 4,
"metadata": {},
- "execution_count": 4
+ "output_type": "execute_result"
}
],
"source": [
@@ -181,45 +181,50 @@
"inference_config = cs3d.CONFIG\n",
"post_process_config = cs3d.PostProcessConfig()\n",
"# select cle device for colab\n",
- "import pyclesperanto_prototype as cle\n",
- "cle.select_device(\"cupy\")"
+ "import pyclesperanto as cle\n",
+ "cle.select_device()"
]
},
{
"cell_type": "markdown",
- "source": [
- "### Select the pretrained model"
- ],
"metadata": {
"id": "b6vIW_oDlpok"
- }
+ },
+ "source": [
+ "### Select the pretrained model"
+ ]
},
{
"cell_type": "code",
- "source": [
- "model_selection = \"SwinUNetR\" #@param [\"SwinUNetR\", \"WNet3D\", \"SegResNet\"]\n",
- "print(f\"Selected model: {model_selection}\")"
- ],
+ "execution_count": 5,
"metadata": {
- "id": "5tkEI1q-loqB",
"colab": {
"base_uri": "https://localhost:8080/"
},
+ "id": "5tkEI1q-loqB",
"outputId": "d41875da-3879-4158-8a0f-6330afe442af"
},
- "execution_count": 5,
"outputs": [
{
- "output_type": "stream",
"name": "stdout",
+ "output_type": "stream",
"text": [
"Selected model: SwinUNetR\n"
]
}
+ ],
+ "source": [
+ "model_selection = \"SwinUNetR\" #@param [\"SwinUNetR\", \"WNet3D\", \"SegResNet\"]\n",
+ "print(f\"Selected model: {model_selection}\")"
]
},
{
"cell_type": "code",
+ "execution_count": 6,
+ "metadata": {
+ "id": "aPFS4WTdmPo3"
+ },
+ "outputs": [],
"source": [
"from napari_cellseg3d.config import ModelInfo\n",
"\n",
@@ -229,27 +234,22 @@
" num_classes=2,\n",
")\n",
"inference_config.model_info = model_info"
- ],
- "metadata": {
- "id": "aPFS4WTdmPo3"
- },
- "execution_count": 6,
- "outputs": []
+ ]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
- "id": "hIEKoyEGAm-0",
"colab": {
"base_uri": "https://localhost:8080/"
},
+ "id": "hIEKoyEGAm-0",
"outputId": "2103baf6-8875-433b-8799-41e0d1f3c7f0"
},
"outputs": [
{
- "output_type": "stream",
"name": "stdout",
+ "output_type": "stream",
"text": [
"--------------------\n",
"Parameters summary :\n",
@@ -265,23 +265,23 @@
]
},
{
- "output_type": "stream",
"name": "stderr",
+ "output_type": "stream",
"text": [
"monai.networks.nets.swin_unetr SwinUNETR.__init__:img_size: Argument `img_size` has been deprecated since version 1.3. It will be removed in version 1.5. The img_size argument is not required anymore and checks on the input size are run during forward().\n",
"INFO:napari_cellseg3d.utils:********************\n"
]
},
{
- "output_type": "stream",
"name": "stdout",
+ "output_type": "stream",
"text": [
"Loading weights...\n"
]
},
{
- "output_type": "stream",
"name": "stderr",
+ "output_type": "stream",
"text": [
"INFO:napari_cellseg3d.utils:Downloading the model from HuggingFace https://huggingface.co/C-Achard/cellseg3d/resolve/main/SwinUNetR_latest.tar.gz....\n",
"270729216B [00:10, 26012663.01B/s] \n",
@@ -289,8 +289,8 @@
]
},
{
- "output_type": "stream",
"name": "stdout",
+ "output_type": "stream",
"text": [
"Weights status : \n",
"Done\n",
@@ -325,16 +325,16 @@
"cell_type": "code",
"execution_count": 8,
"metadata": {
- "id": "IFbmZ3_zAm-1",
"colab": {
"base_uri": "https://localhost:8080/"
},
+ "id": "IFbmZ3_zAm-1",
"outputId": "bde6a6c5-f47f-4164-9e1c-3bf5a94dd00d"
},
"outputs": [
{
- "output_type": "stream",
"name": "stderr",
+ "output_type": "stream",
"text": [
"1it [00:00, 9.61it/s]\n",
"clesperanto's cupy / CUDA backend is experimental. Please use it with care. The following functions are known to cause issues in the CUDA backend:\n",
@@ -362,7 +362,6 @@
"cell_type": "code",
"execution_count": 9,
"metadata": {
- "id": "TMRiQ-m4Am-1",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 496,
@@ -376,29 +375,26 @@
"10441f745a6f41cf8655b2fafbb8204f"
]
},
+ "id": "TMRiQ-m4Am-1",
"outputId": "2d819126-5478-4d98-a5e2-ecacb7872465"
},
"outputs": [
{
- "output_type": "display_data",
"data": {
- "text/plain": [
- "interactive(children=(IntSlider(value=62, description='z', max=123), Output()), _dom_classes=('widget-interact…"
- ],
"application/vnd.jupyter.widget-view+json": {
+ "model_id": "7a72ee57e14c440bb2ce281da67e1311",
"version_major": 2,
- "version_minor": 0,
- "model_id": "7a72ee57e14c440bb2ce281da67e1311"
- }
+ "version_minor": 0
+ },
+ "text/plain": [
+ "interactive(children=(IntSlider(value=62, description='z', max=123), Output()), _dom_classes=('widget-interact…"
+ ]
},
- "metadata": {}
+ "metadata": {},
+ "output_type": "display_data"
},
{
- "output_type": "execute_result",
"data": {
- "text/plain": [
- ""
- ],
"text/html": [
"