Deep learning for cellular and sub-cellular segmentation made easy.

Fluffy

Reproducible deep learning based segmentation of biomedical images.

Overview

What is fluffy?

Fluffy is a simple browser-based tool for applying custom deep learning models to biomedical image segmentation. Some example images (named browser) can be found in the ./data/ directory. Because microscopy images are usually large files, fluffy runs in a local Docker container; compared with a standard web server, this greatly reduces file transfer and speeds everything up.

Some key features include:

  • Several models to choose from:
    • Nuclear segmentation
    • Cytoplasmic segmentation
    • Stress granule segmentation
    • ER segmentation
  • Single-image viewing to assess model performance
  • Batch processing of multiple files at once

Additionally, all code relies on well-maintained packages (numpy, tensorflow, scikit-image, flask).
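Segmentation models of this kind typically output a per-pixel probability map that is then thresholded into a binary mask. The sketch below illustrates that final step with plain numpy; the function name and default threshold are illustrative assumptions, not fluffy's actual API:

```python
import numpy as np

def mask_from_probabilities(prob_map, threshold=0.5):
    """Turn a model's per-pixel probability map into a binary mask.

    Hypothetical post-processing step; fluffy's real pipeline may differ.
    """
    # Pixels above the threshold become foreground (1), the rest background (0)
    return (np.asarray(prob_map) > threshold).astype(np.uint8)

probs = np.array([[0.1, 0.9],
                  [0.7, 0.2]])
mask = mask_from_probabilities(probs)
```

In practice the threshold is a tunable trade-off between over- and under-segmentation.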

Project organization

This repository:

    ├── LICENSE
    ├── README.md          <- This top-level README.
    ├── data/              <- Sample data to be displayed. For training data read below.
    ├── docs/              <- Home to the manual.
    └── web/               <- Fluffy interface. Flask application used in the docker container.


Examples

  • Nuclear segmentation using the categorical model providing a class to separate nuclei. See here.
  • Granular segmentation illustrating the selectivity of the model. See here.
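A categorical model of the kind mentioned above predicts one class per pixel; a common scheme is background, nucleus, and a border class that separates touching nuclei (the exact classes here are an assumption). One-hot targets for such a model can be built from an integer label mask, as this illustrative numpy sketch shows (not fluffy's actual training code):

```python
import numpy as np

def to_categorical_mask(labels, num_classes=3):
    """One-hot encode an integer class mask.

    Assumed class scheme: 0 = background, 1 = nucleus, 2 = border.
    """
    labels = np.asarray(labels)
    # Indexing the identity matrix maps each integer label to its one-hot row
    return np.eye(num_classes, dtype=np.uint8)[labels]

label_mask = np.array([[0, 1],
                       [2, 0]])
one_hot = to_categorical_mask(label_mask)  # shape (2, 2, 3)
```

The border class is what lets the model separate adjacent nuclei that a plain binary mask would merge.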

System requirements and installation

Go to Docker Hub and find the latest version of fluffy.

# Replace with the latest version at hub.docker.com/r/bbquercus/fluffy  
docker pull bbquercus/fluffy:VERSION
docker run -p 5000:5000 bbquercus/fluffy:VERSION

Visit localhost:5000 in your browser of choice.

Data and model availability

Data is currently not available, but all annotated images will be released once sufficient testing has been performed. Pretrained models are downloaded automatically within the interface or can be accessed here.

Labeling and data preparation

Labeling is done in Fiji, and data preparation uses simple command-line tools within a conda environment. Both processes are described in the extensive manual. Training can be done at your own risk or by contacting me; the training code is also open sourced here.

Roadmap

  • Flask application for easy inferencing.
  • Separate training from inference; fluffy will remain for inference only, via the Flask application.
  • Open sourcing of all training data.
  • Addition of spot detection (in collaboration with @zhanyinx).

Citation

If you find fluffy to be useful, please cite my repository:

@misc{Fluffy,
      author = {Eichenberger, Bastian Th.},
      title = {Fluffy},
      year = {2020},
      publisher = {GitHub},
      journal = {GitHub repository},
      howpublished = {\url{https://github.com/bbquercus/fluffy}}
}