A Napari plugin for manual segmentation of predefined structures or semi-automatic segmentation with a one-shot learning procedure. The objective was to simplify the interface as much as possible so that the user can concentrate on the annotation task, using a pen on a tablet or a mouse on a computer.
This napari plugin was generated with Cookiecutter using @napari's cookiecutter-napari-plugin template.
- Installation and Usage
- Hesperos: Manual Segmentation and Correction mode
- Hesperos: OneShot Segmentation mode
The Hesperos plugin is designed to run on Windows (up to Windows 11) and macOS with Python 3.8, 3.9 or 3.10.
- Install Anaconda and unselect Add to PATH. Note the path where you install Anaconda.
- Download only the script_files folder for Windows or macOS.
- Add your Anaconda path in these script files:
- For Windows:
Right click on the .bat files (for installation and running) and select Edit. Replace PATH_TO_ADD with your Anaconda path, then save the changes.
for example:
anaconda_dir=C:\Users\chgodard\anaconda3
- For macOS:
- Right click on the .command files (for installation and running) and select Open with TextEdit. Replace PATH_TO_ADD with your Anaconda path, then save the changes.
for example:
source ~/opt/anaconda3/etc/profile.d/conda.sh
- In your terminal, change the permissions to make the following .command files executable (replace PATH with the path to your .command files):
chmod u+x PATH/install_hesperos_env.command
chmod u+x PATH/run_hesperos.command
- Double click on the install_hesperos_env file to create a virtual environment in Anaconda with Python 3.9 and Napari 0.4.14.
/!\ The Hesperos plugin is not yet compatible with Napari versions newer than 0.4.14.
- Double click on the run_hesperos file to run Napari from your virtual environment.
- In Napari:
- Go to Plugins/Install Plugins...
- Search for "hesperos" (it can take a while to load).
- Install the hesperos plugin.
- When the installation is done, close Napari. A restart of Napari is required to finish the plugin installation.
- Double click on the run_hesperos file to run Napari.
- In Napari, use the Hesperos plugin with Plugins/hesperos.
- Install Anaconda and unselect Add to PATH.
- Open an Anaconda prompt.
- Create a virtual environment with Python 3.8 / 3.9 / 3.10:
conda create -n hesperos_env python=3.9
- Install the required Python packages in your virtual environment:
conda activate hesperos_env
pip install napari==0.4.14
conda install -c anaconda pyqt
pip install vispy==0.9.6
pip install hesperos
/!\ The Hesperos plugin is not yet compatible with Napari versions newer than 0.4.14.
- Launch Napari:
napari
- To update the plugin, double click on the run_hesperos file to run Napari.
- In Napari:
- Go to Plugins/Install Plugins...
- Search for "hesperos" (it can take a while to load).
- Click on Update if a new version of Hesperos is available. You can check the latest version of Hesperos on the Napari Hub.
- When the installation is done, close Napari. A restart of Napari is required to finish the plugin installation.
The Manual Segmentation and Correction mode of the Hesperos plugin is a simplified and optimized interface for basic 2D manual segmentation of several structures in a 3D image using a mouse or a stylus with a tablet.
The Hesperos plugin can be used with Digital Imaging and Communications in Medicine (DICOM), Neuroimaging Informatics Technology Initiative (NIfTI) or Tagged Image File Format (TIFF) images. To improve performance, use images stored on your local disk.
- To import data:
- After the image has loaded, a slider appears that allows you to zoom in/out. Zooming is also possible with the button in the layer controls panel.
- If your data is a DICOM series, you can directly change the contrast of the image (in Hounsfield units):
- In the bottom left corner of the application you also have the possibility to:
When data is loaded, two layers are created: the `image` layer and the `annotations` layer. The order in the layer list corresponds to the overlay order. By clicking on these layers you will have access to different layer controls (at the top left corner of the application). All actions can be undone/redone with the Ctrl-Z/Shift-Ctrl-Z keyboard shortcuts. You can also hide a layer by clicking on its eye icon in the layer list.
For the `image` layer:
- `opacity`: a slider to control the global opacity of the layer.
- `contrast limits`: a double slider to manually control the contrast of the image (same as the option for DICOM data).
For the `annotations` layer:
- : erase brush to erase all labels at once (if `preserve labels` is not selected) or only the selected label (if `preserve labels` is selected).
- : paint brush with the same color as the `label` rectangle.
- : fill bucket with the same color as the `label` rectangle.
- : select to zoom in and out with the mouse wheel (same as the zoom slider at the top right corner in Panel 1).
- `label`: a colored rectangle to represent the selected label.
- `opacity`: a slider to control the global opacity of the layer.
- `brush size limits`: a slider to control the size of the paint/erase brush.
- `preserve labels`: if selected, all actions are applied only to the selected label (see the `label` rectangle); if not selected, actions are applied to all labels.
- `show selected`: if selected, only the selected label is displayed on the layer; if not selected, all labels are displayed.
Remark: a second filling option is available:
- Draw the edge of a closed shape with the paint brush mode.
- Double click to activate the fill bucket.
- Click inside the closed area to fill it.
- Double click on the filled area to deactivate the fill bucket and reactivate the paint brush mode.
For the `orientations` and `landmarks` layers:
- `opacity`: a slider to control the global opacity of the layer.
Manual annotation and correction of the segmented file are done using the layer controls of the `annotations` layer. Click on the layer to display them. /!\ You have to choose a structure to start annotating (see 2.).
- To modify an existing segmentation, you can directly open the segmented file with the button. The file must have the same dimensions as the original image.
/!\ Only .tiff, .tif, .nii and .nii.gz files are supported as segmented files.
- Choose a structure to annotate in the drop-down menu:
  - `Feta Challenge`: to annotate fetal brain MRI with the same labels as the FeTA Challenge.
  - `Fetus`: to annotate pregnancy images.
  - `Larva`: to annotate Drosophila larva images according to the anatomy described in Schoborg et al. 2019.
  - `Mouse Embryon`: to annotate HREM or MicroCT images of mouse embryos.
  - `Shoulder`: to annotate bones and muscles for shoulder surgery.
  - `Shoulder Bones`: to annotate only a few bones for shoulder surgery.
When selecting a structure, a new panel appears with a list of elements to annotate. Each element has its own label and color. Select one element in the list to automatically activate the paint brush mode with the corresponding color (the color is updated in the `label` rectangle in the layer controls panel).
- If you need to work on a specific slice of your 3D image, but also have to explore the volume to understand some complex structures, you can use the locking option to facilitate the annotation task.
- To activate the functionality:
- To deactivate the functionality (or change the locked slice index):
A maximum of 10 slices can be selected in a 3D image; the corresponding indices are written to the metadata when the segmentation file is exported.
/!\ Metadata integration is available only for exported .tiff and .tif files and with the `Unique` save option.
- : to add the currently displayed slice index to the drop-down menu.
- : to remove the currently displayed slice index from the drop-down menu.
- : to go to the slice index selected in the drop-down menu. The icon is checked when the currently displayed slice index matches the index selected in the drop-down menu.
- : a drop-down menu containing the list of selected slice indices. Select an index from the list to work with it more easily.
- Annotations can be exported as a .tif, .tiff, .nii or .nii.gz file with the button, in one of the two following saving modes (see the sketch after this list for how the two layouts relate):
  - `Unique`: segmented data is exported as a single 3D image with the corresponding label ids (1-2-3-...). This file can be re-opened in the application.
  - `Several`: segmented data is exported as several binary 3D images (0 or 255), one for each label id.
- : delete annotation data.
- `Automatic segmentation backup`: if selected, the segmentation data is automatically exported as a unique 3D image whenever the displayed slice is changed. /!\ This process can slow down the display if the image is large.
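For illustration, a `Unique` export can be converted into the `Several` layout offline. The following is a minimal sketch (not part of the plugin), assuming the export was saved under the hypothetical name segmentation_unique.tif with integer label ids:

```python
# Minimal sketch (not part of the plugin): convert a "Unique" export into the
# "Several" layout, i.e. one binary (0/255) 3D image per label id.
# "segmentation_unique.tif" is a hypothetical file name; adapt it to your export.
import numpy as np
import tifffile

labels = tifffile.imread("segmentation_unique.tif")  # 3D array of label ids, 0 = background

for label_id in np.unique(labels):
    if label_id == 0:
        continue  # skip background
    binary = np.where(labels == label_id, 255, 0).astype(np.uint8)
    tifffile.imwrite(f"label_{label_id}.tif", binary)
```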
The OneShot Segmentation mode of the Hesperos plugin is a 2D version of the VoxelLearning method implemented in DIVA (see our GitHub and the article Guérinot, C., Marcon, V., Godard, C., et al. (2022). New Approach to Accelerated Image Annotation by Leveraging Virtual Reality and Cloud Computing. Frontiers in Bioinformatics. doi:10.3389/fbinf.2021.777101).
The principle is to accelerate the segmentation without prior information. The procedure consists of:
- A rapid tagging of a few pixels in the image with two labels: one for the structure of interest (positive tags) and one for the other structures (negative tags).
- A training of a simple random forest classifier with these tagged pixels and their features (mean, Gaussian, ...).
- An inference on all the pixels of the image to automatically segment the structure of interest. The output is a probability image (0-255) giving, for each pixel, the probability of belonging to the structure of interest.
- Iterative corrections if needed.
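As an illustration of these steps (a simplified sketch under assumed array names and features, not the plugin's exact implementation), the snippet below trains a random forest on a few tagged pixels described by simple intensity features, then predicts a 0-255 probability image for the whole volume:

```python
# Simplified sketch of the one-shot procedure (not the plugin's exact code).
# Assumptions: "image" is a 2D or 3D numpy array; "tags" has the same shape with
# 0 = untagged, 1 = structure of interest (positive tags), 2 = other (negative tags).
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

def pixel_features(image):
    """Stack a few simple per-pixel features (raw intensity, local mean, Gaussian blur)."""
    img = image.astype(np.float32)
    return np.stack(
        [
            img,
            ndimage.uniform_filter(img, size=3),    # local mean
            ndimage.gaussian_filter(img, sigma=2),  # Gaussian smoothing
        ],
        axis=-1,
    )

def one_shot_segment(image, tags):
    feats = pixel_features(image)
    tagged = tags > 0
    # Steps 1-2: train a simple random forest classifier on the tagged pixels only.
    clf = RandomForestClassifier(n_estimators=50)
    clf.fit(feats[tagged], (tags[tagged] == 1).astype(int))
    # Step 3: infer every pixel and rescale to a 0-255 probability image.
    proba = clf.predict_proba(feats.reshape(-1, feats.shape[-1]))[:, 1]
    return (proba.reshape(image.shape) * 255).astype(np.uint8)
```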
Same panel as the Manual Segmentation and Correction mode (see panel 1 description).
Annotations and corrections on the segmented file are done using the layer controls of the `annotations` layer. Click on the layer to display them. Only two labels are available: `Structure of interest` and `Other`.
The rapid manual tagging step of the one-shot learning method aims to learn and attribute different features to each label. To achieve that, the user has to:
- with the `Structure of interest` label, tag a few pixels of the structure of interest.
- with the `Other` label, tag the greatest diversity of uninteresting structures in the 3D image (avoid tagging too many pixels).
See the example image with the `Structure of interest` label in red and the `Other` label in cyan.
- To modify an existing segmentation, you can directly open the segmented file with the button. The file must have the same dimensions as the original image.
/!\ Only .tiff, .tif, .nii and .nii.gz files are supported as segmented files.
- All actions can be undone with the button or Ctrl-Z.
From the previously tagged pixels, features are extracted and used to train a basic classifier: a Random Forest Classifier (RFC). Once the pixel classifier is trained, it is applied to every pixel of the complete volume and outputs, for each pixel, the probability of belonging to the structure of interest.
To run training and inference, click on the button:
- You will be asked to save a .pckl file which corresponds to the model.
- A new status will appear under Panel 4: `Computing...`. You must wait for the message to change to `Ready` before doing anything in the application (otherwise the application may freeze or crash).
- When the processing is done, two new layers will appear:
  - the `probabilities` layer, which corresponds to the direct probability (between 0 and 1) of each pixel belonging to the structure of interest. This layer is disabled by default; to enable it, click on its eye icon in the layer list.
  - the `segmented probabilities` layer, which corresponds to a binary image obtained from the probability image, normed and thresholded according to a value manually set with the `Probability threshold` slider.
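As a rough illustration of how the `segmented probabilities` layer relates to the probability image (a sketch assuming a 0-255 probability image and a threshold value taken from the slider, not the plugin's exact code):

```python
import numpy as np

def segment_probabilities(proba_255, threshold=128):
    # Keep pixels whose probability reaches the chosen threshold (hypothetical default 128).
    return np.where(proba_255 >= threshold, 255, 0).astype(np.uint8)
```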
Remark: If the output is not perfect, you have two possibilities to improve the result:
- Add some tags with the paint brush to take into consideration uninteresting structures, or add information in critical areas of your structure of interest (such as thin sections). Then, run the training and inference process again. /!\ This will overwrite all previous segmentation data.
- Export your segmentation data and re-open it with the Manual Annotation and Correction mode of Hesperos to manually erase or add annotations.
- Segmented probabilities can be exported as a .tif, .tiff, .nii or .nii.gz file with the button. The image is exported as a unique 3D binary image (values 0 and 255). This file can be re-opened in the application for correction.
- Probabilities can be exported as a .tif, .tiff, .nii or .nii.gz file with the button, as a unique 3D image. The probability image is normalized between 0 and 255.
- : delete annotation data.
Distributed under the terms of the BSD-3 license, Hesperos is a free and open source software.