From 078cb73cdb5fae1b07254f19fea4321ba3e02b99 Mon Sep 17 00:00:00 2001
From: Anwai Archit <52396323+anwai98@users.noreply.github.com>
Date: Sun, 28 Apr 2024 15:27:16 +0200
Subject: [PATCH] Minor touch-up updates to torch-em docs (#241)

Update docs
---
 README.md             | 69 ++++++++++++++++++++++++++++++++++++++++++++++++++---------
 experiments/README.md | 12 +++++++-----
 2 files changed, 65 insertions(+), 16 deletions(-)

diff --git a/README.md b/README.md
index d683479a..a968e4db 100644
--- a/README.md
+++ b/README.md
@@ -2,9 +2,9 @@
 [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.5108853.svg)](https://doi.org/10.5281/zenodo.5108853)
 [![Anaconda-Server Badge](https://anaconda.org/conda-forge/torch_em/badges/version.svg)](https://anaconda.org/conda-forge/torch_em)
 
-# Torch'em
+# torch-em
 
-Deep-learning based semantic and instance segmentation for 3D Electron Microscopy and other bioimage analysis problems based on pytorch.
+Deep-learning based semantic and instance segmentation for 3D Electron Microscopy and other bioimage analysis problems based on PyTorch.
 Any feedback is highly appreciated, just open an issue!
 
 Highlights:
@@ -85,27 +85,31 @@ For a more in-depth example, check out one of the example notebooks:
 
 ## Installation
 
-### From conda
+### From mamba
+
+[mamba](https://mamba.readthedocs.io/en/latest/) is a drop-in replacement for conda, but much faster. While the steps below may also work with `conda`, it's highly recommended to use `mamba`. You can follow the instructions [here](https://mamba.readthedocs.io/en/latest/installation/mamba-installation.html) to install `mamba`.
 
 You can install `torch_em` from conda-forge:
 ```
-conda install -c conda-forge torch_em
+mamba install -c conda-forge torch_em
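+# optional: verify that the installation worked
+python -c "import torch_em"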
 ```
-Please check out [pytorch.org](https://pytorch.org/) for more information on how to install a pytorch version compatible with your system.
+Please check out [pytorch.org](https://pytorch.org/) for more information on how to install a PyTorch version compatible with your system.
 
 ### From source
 
 It's recommended to set up a conda environment for using `torch_em`.
-Two conda environment files are provided: `environment_cpu.yaml` for a pure cpu set-up and `environment_gpu.yaml` for a gpu set-up.
-If you want to use the gpu version, make sure to set the correct cuda version for your system in the environment file, by modifiying [this-line](https://github.com/constantinpape/torch-em/blob/main/environment_gpu.yaml#L9).
+Two conda environment files are provided: `environment_cpu.yaml` for a pure CPU set-up and `environment_gpu.yaml` for a GPU set-up.
+If you want to use the GPU version, make sure to set the correct CUDA version for your system in the environment file, by modifying [this line](https://github.com/constantinpape/torch-em/blob/main/environment_gpu.yaml#L9).
 You can set up a conda environment using one of these files like this:
-```sh
-conda env create -f <ENV>.yaml -n <ENV_NAME>
-conda activate <ENV_NAME>
+```bash
+mamba env create -f <ENV>.yaml -n <ENV_NAME>
+mamba activate <ENV_NAME>
 pip install -e .
 ```
-where <ENV>.yaml is either `environment_cpu.yaml` or `environment_gpu.yaml`.
+where `<ENV>.yaml` is either `environment_cpu.yaml` or `environment_gpu.yaml`.
 
 ## Features
 
@@ -113,6 +117,43 @@ where <ENV>.yaml is either `environment_cpu.yaml` or `environment_gpu.yaml`.
 - Training of [2d U-Nets](https://doi.org/10.1007/978-3-319-24574-4_28) and [3d U-Nets](https://doi.org/10.1007/978-3-319-46723-8_49) for various segmentation tasks.
 - Random forest based domain adaptation from [Shallow2Deep](https://doi.org/10.1101/2021.11.09.467925)
 - Training models for embedding prediction with sparse instance labels from [SPOCO](https://arxiv.org/abs/2103.14572)
+- Training of [UNETR](https://doi.org/10.48550/arXiv.2103.10504) for various 2d segmentation tasks, with a flexible choice of vision transformer backbone from [Segment Anything](https://doi.org/10.48550/arXiv.2304.02643) or [Masked Autoencoder](https://doi.org/10.48550/arXiv.2111.06377).
+- Training of [ViM-UNet](https://doi.org/10.48550/arXiv.2404.07705) for various 2d segmentation tasks.
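+
+For orientation, training a 2d U-Net along these lines can look like the following minimal
+sketch, condensed from the quickstart example above (the DSB data, patch shape and iteration
+count are illustrative choices):
+
+```python
+import torch_em
+from torch_em.data.datasets import get_dsb_loader
+from torch_em.model import UNet2d
+
+# 2d U-Net with two output channels: foreground and boundary probabilities
+model = UNet2d(in_channels=1, out_channels=2)
+
+# transform that converts instance labels into binary and boundary targets
+label_transform = torch_em.transform.label.BoundaryTransform(add_binary_target=True)
+
+# training and validation loaders for the DSB nucleus segmentation data;
+# the data is downloaded to ./dsb on first use
+train_loader = get_dsb_loader(
+    "./dsb", patch_shape=(1, 256, 256), batch_size=8,
+    split="train", download=True, label_transform=label_transform,
+)
+val_loader = get_dsb_loader(
+    "./dsb", patch_shape=(1, 256, 256), batch_size=8,
+    split="test", download=True, label_transform=label_transform,
+)
+
+# the trainer wraps the training loop, logging and checkpointing
+trainer = torch_em.default_segmentation_trainer(
+    name="dsb-boundary-model", model=model,
+    train_loader=train_loader, val_loader=val_loader,
+    learning_rate=1.0e-4,
+)
+trainer.fit(iterations=5000)
+```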
 
 ## Command Line Scripts
 
@@ -128,3 +169,9 @@ For more details run `<command> -h` for any of these commands.
 
 The folder [scripts/cli](https://github.com/constantinpape/torch-em/tree/main/scripts/cli) contains some examples for how to use the CLI.
 Note: this functionality was recently added and is not fully tested.
+
+## Research Projects using `torch-em`
+
+- [Probabilistic Domain Adaptation for Biomedical Image Segmentation](https://doi.org/10.48550/arXiv.2303.11790)
+- [Segment Anything for Microscopy](https://doi.org/10.1101/2023.08.21.554208)
+- [ViM-UNet: Vision Mamba for Biomedical Segmentation](https://doi.org/10.48550/arXiv.2404.07705)
diff --git a/experiments/README.md b/experiments/README.md
index 9750d275..eee5ba34 100644
--- a/experiments/README.md
+++ b/experiments/README.md
@@ -1,13 +1,13 @@
 # Experiments
 
 Training and evaluation of neural networks for biomedical image analysis with `torch_em`.
-The subfolders `unet_segmentation`, `shallow2deep`, `spoco` and `probabilistic_domain_adaptation` contain code for different methods.
+The subfolders `unet-segmentation`, `shallow2deep`, `spoco`, `probabilistic_domain_adaptation`, `vision-transformer` and `vision-mamba` contain scripts for different methods.
 
 The best entrypoints for training a model yourself are the notebooks:
 - `2D-UNet-Training`: train a 2d UNet for segmentation tasks, [available on google colab](https://colab.research.google.com/github/constantinpape/torch-em/blob/main/experiments/2D-UNet-Training.ipynb).
 - `3D-UNet-Training`: train a 3d UNet for segmentation tasks, [available on google colab](https://colab.research.google.com/github/constantinpape/torch-em/blob/main/experiments/3D-UNet-Training.ipynb).
 
-## unet_segmentation
+## unet-segmentation
 
 This folder contains several experiments for training 2d or 3d U-Nets for segmentation tasks.
 Most of these models are available on [BioImage.IO](https://bioimage.io/#/).
@@ -14,18 +14,16 @@
 If you encounter an issue with one of these experiments please open an issue!
 
 ## shallow2deep
 
 Experiments for the re-implementation of [From Shallow to Deep: Exploiting Feature-Based Classifiers for Domain Adaptation in Semantic Segmentation](https://doi.org/10.3389/fcomp.2022.805166).
 The code here was used to train the models for the [ilastik Trainable Domain Adaptation Workflow](https://www.ilastik.org/documentation/tda/tda).
 
-
 ## spoco
 
 Experiments for the re-implementation of [Sparse Object-Level Supervision for Instance Segmentation With Pixel Embeddings](https://openaccess.thecvf.com/content/CVPR2022/html/Wolny_Sparse_Object-Level_Supervision_for_Instance_Segmentation_With_Pixel_Embeddings_CVPR_2022_paper.html).
 Work in progress.
 
-
 ## probabilistic_domain_adaptation
 
 Experiments for the re-implementation of [Probabilistic Domain Adaptation for Biomedical Image Segmentation](https://arxiv.org/abs/2303.11790).
 Work in progress.
@@ -32,4 +30,8 @@ Experiments for the re-implementation of [Probabilistic Domain Adaptation for Bi
 
 ## vision-transformer
 
-WIP
+Work in progress.
+
+## vision-mamba
+
+Experiments for [ViM-UNet: Vision Mamba for Biomedical Segmentation](https://doi.org/10.48550/arXiv.2404.07705).