Minor touch-up updates to torch-em docs (#241)
Update docs
anwai98 committed Apr 28, 2024
1 parent b15a1af commit 078cb73
Showing 2 changed files with 28 additions and 16 deletions.
32 changes: 21 additions & 11 deletions README.md
@@ -2,9 +2,9 @@
[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.5108853.svg)](https://doi.org/10.5281/zenodo.5108853)
[![Anaconda-Server Badge](https://anaconda.org/conda-forge/torch_em/badges/version.svg)](https://anaconda.org/conda-forge/torch_em)

# Torch'em
# torch-em

Deep-learning based semantic and instance segmentation for 3D Electron Microscopy and other bioimage analysis problems based on pytorch.
Deep-learning based semantic and instance segmentation for 3D Electron Microscopy and other bioimage analysis problems based on PyTorch.
Any feedback is highly appreciated; just open an issue!

Highlights:
@@ -85,34 +85,38 @@ For a more in-depth example, check out one of the example notebooks:

## Installation

### From conda
### From mamba

[mamba](https://mamba.readthedocs.io/en/latest/) is a drop-in replacement for conda, but much faster. While the steps below may also work with `conda`, it's highly recommended to use `mamba`. You can follow the instructions [here](https://mamba.readthedocs.io/en/latest/installation/mamba-installation.html) to install `mamba`.

You can install `torch_em` from conda-forge:
```
conda install -c conda-forge torch_em
mamba install -c conda-forge torch_em
```
Please check out [pytorch.org](https://pytorch.org/) for more information on how to install a pytorch version compatible with your system.
Please check out [pytorch.org](https://pytorch.org/) for more information on how to install a PyTorch version compatible with your system.
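
A quick way to check that the installed PyTorch build matches your system is a snippet along these lines (a minimal sketch; whether CUDA is reported as available depends on the build you installed and on your driver):
```python
import torch

print(torch.__version__)          # the installed PyTorch version
print(torch.cuda.is_available())  # True only if a GPU build and a working driver are present
```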

### From source

It's recommended to set up a conda environment for using `torch_em`.
Two conda environment files are provided: `environment_cpu.yaml` for a pure cpu set-up and `environment_gpu.yaml` for a gpu set-up.
If you want to use the gpu version, make sure to set the correct cuda version for your system in the environment file by modifying [this line](https://github.com/constantinpape/torch-em/blob/main/environment_gpu.yaml#L9).
Two conda environment files are provided: `environment_cpu.yaml` for a pure CPU set-up and `environment_gpu.yaml` for a GPU set-up.
If you want to use the GPU version, make sure to set the correct CUDA version for your system in the environment file by modifying [this line](https://github.com/constantinpape/torch-em/blob/main/environment_gpu.yaml#L9).

You can set up a conda environment using one of these files like this:
```sh
conda env create -f <ENV>.yaml -n <ENV_NAME>
conda activate <ENV_NAME>
```bash
mamba env create -f <ENV>.yaml -n <ENV_NAME>
mamba activate <ENV_NAME>
pip install -e .
```
where <ENV>.yaml is either `environment_cpu.yaml` or `environment_gpu.yaml`.
where `<ENV>.yaml` is either `environment_cpu.yaml` or `environment_gpu.yaml`.
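
To confirm that the editable install worked, a short import check is enough (a sketch; it assumes the package exposes a `__version__` attribute):
```python
import torch_em

# If this prints a version string, the editable install is on your path.
print(torch_em.__version__)
```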


## Features

- Training of [2d U-Nets](https://doi.org/10.1007/978-3-319-24574-4_28) and [3d U-Nets](https://doi.org/10.1007/978-3-319-46723-8_49) for various segmentation tasks (see the sketch after this list).
- Random forest based domain adaptation from [Shallow2Deep](https://doi.org/10.1101/2021.11.09.467925)
- Training models for embedding prediction with sparse instance labels from [SPOCO](https://arxiv.org/abs/2103.14572)
- Training of [UNETR](https://doi.org/10.48550/arXiv.2103.10504) for various 2d segmentation tasks, with a flexible choice of vision transformer backbone from [Segment Anything](https://doi.org/10.48550/arXiv.2304.02643) or [Masked Autoencoder](https://doi.org/10.48550/arXiv.2111.06377).
- Training of [ViM-UNet](https://doi.org/10.48550/arXiv.2404.07705) for various 2d segmentation tasks.
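
To make the first feature concrete, here is a minimal 2d U-Net training sketch built on the high-level `torch_em` API (`UNet2d`, `default_segmentation_loader` and `default_segmentation_trainer` are the library's documented entry points; the file paths, dataset keys and hyperparameters below are placeholders):
```python
import torch_em
from torch_em.model import UNet2d

# A 2d U-Net with one input (raw) channel and one output (foreground) channel.
model = UNet2d(in_channels=1, out_channels=1)

# Loaders that sample random patches from container formats such as hdf5 or zarr;
# the paths and internal dataset keys here are placeholders.
train_loader = torch_em.default_segmentation_loader(
    raw_paths="train-data.h5", raw_key="raw",
    label_paths="train-data.h5", label_key="labels",
    batch_size=1, patch_shape=(256, 256),
)
val_loader = torch_em.default_segmentation_loader(
    raw_paths="val-data.h5", raw_key="raw",
    label_paths="val-data.h5", label_key="labels",
    batch_size=1, patch_shape=(256, 256),
)

# The default trainer bundles loss, metric, optimizer, logging and checkpointing.
trainer = torch_em.default_segmentation_trainer(
    name="my-unet-2d", model=model,
    train_loader=train_loader, val_loader=val_loader,
    learning_rate=1.0e-4,
)
trainer.fit(iterations=5000)
```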


## Command Line Scripts
@@ -128,3 +132,9 @@ For more details run `<COMMAND> -h` for any of these commands.
The folder [scripts/cli](https://github.com/constantinpape/torch-em/tree/main/scripts/cli) contains some examples for how to use the CLI.

Note: this functionality was recently added and is not fully tested.

## Research Projects using `torch-em`

- [Probabilistic Domain Adaptation for Biomedical Image Segmentation](https://doi.org/10.48550/arXiv.2303.11790)
- [Segment Anything for Microscopy](https://doi.org/10.1101/2023.08.21.554208)
- [ViM-UNet: Vision Mamba for Biomedical Segmentation](https://doi.org/10.48550/arXiv.2404.07705)
12 changes: 7 additions & 5 deletions experiments/README.md
@@ -1,13 +1,13 @@
# Experiments

Training and evaluation of neural networks for biomedical image analysis with `torch_em`.
The subfolders `unet_segmentation`, `shallow2deep`, `spoco` and `probabilistic_domain_adaptation` contain code for different methods.
The subfolders `unet-segmentation`, `shallow2deep`, `spoco`, `probabilistic_domain_adaptation`, `vision-transformer` and `vision-mamba` contain scripts for different methods.

The best entrypoints for training a model yourself are the notebooks:
- `2D-UNet-Training`: train a 2d UNet for segmentation tasks, [available on Google Colab](https://colab.research.google.com/github/constantinpape/torch-em/blob/main/experiments/2D-UNet-Training.ipynb).
- `3D-UNet-Training`: train a 3d UNet for segmentation tasks, [available on Google Colab](https://colab.research.google.com/github/constantinpape/torch-em/blob/main/experiments/3D-UNet-Training.ipynb).

## unet_segmentation
## unet-segmentation

This folder contains several experiments for training 2d or 3d U-Nets for segmentation tasks.
Most of these models are available on [BioImage.IO](https://bioimage.io/#/).
@@ -19,17 +19,19 @@ If you encounter an issue with one of these experiments please open an issue!

Experiments for the re-implementation of [From Shallow to Deep: Exploiting Feature-Based Classifiers for Domain Adaptation in Semantic Segmentation](https://doi.org/10.3389/fcomp.2022.805166). The code here was used to train the models for the [ilastik Trainable Domain Adaptation Workflow](https://www.ilastik.org/documentation/tda/tda).


## spoco

Experiments for the re-implementation of [Sparse Object-Level Supervision for Instance Segmentation With Pixel Embeddings](https://openaccess.thecvf.com/content/CVPR2022/html/Wolny_Sparse_Object-Level_Supervision_for_Instance_Segmentation_With_Pixel_Embeddings_CVPR_2022_paper.html). Work in progress.


## probabilistic_domain_adaptation

Experiments for the re-implementation of [Probabilistic Domain Adaptation for Biomedical Image Segmentation](https://arxiv.org/abs/2303.11790). Work in progress.


## vision-transformer

WIP
Work in progress.

## vision-mamba

Experiments for [ViM-UNet: Vision Mamba for Biomedical Segmentation](https://doi.org/10.48550/arXiv.2404.07705).
