5 files deleted (file names not shown in this view)
22 changes: 7 additions & 15 deletions .github/workflows/python-app.yml
@@ -5,34 +5,26 @@ name: Python application

on:
  push:
    branches: [ develop, v2.1 ]
    branches: [ develop ]
  pull_request:
    branches: [ develop, v2.1 ]
    branches: [ develop ]

jobs:
  build:

    strategy:
      matrix:
        tfa: [true, false]


    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v2
    - name: Set up Python 3.8
    - name: Set up Python 3.9
      uses: actions/setup-python@v2
      with:
        python-version: 3.8
        python-version: 3.9
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install flake8 pytest numpy
        pip install flake8 pytest
        if [ -f requirements.txt ]; then pip install -r requirements.txt; fi

    - name: Conditionally install tfa
      if: ${{ matrix.tfa }}
      run: |
        pip install tensorflow-addons
    - name: Lint with flake8
      run: |
        # stop the build if there are Python syntax errors or undefined names
12 changes: 2 additions & 10 deletions .gitignore
@@ -2,19 +2,11 @@
**/__pycache__
deeptrack-app/*
*/datasets/*
paper-examples/models/*
*/models/*

build/*
dist/*
*.egg-info/
*/datasets/*
*/theory
_src/build/**/*

ParticleSizing
3DTracking
CellData
ParticleTracking
data/
datasets/
examples/**/*/models/
_src/build/**/*
175 changes: 92 additions & 83 deletions README.md
@@ -1,140 +1,149 @@
<p align="center">
<img width="350" src=https://github.com/softmatterlab/DeepTrack-2.0/blob/develop/assets/logo.png?raw=true>
<img width="350" src=https://github.com/softmatterlab/DeepTrack-2.0/blob/master/assets/logo.png?raw=true>
</p>

DeepTrack is a comprehensive deep learning framework for digital microscopy.
We provide tools to create physical simulations of customizable optical systems, to generate and train neural network models, and to analyze experimental data.

If you use DeepTrack 2.1 in your project, please cite our DeepTrack article:
DeepTrack is a comprehensive deep learning framework for digital microscopy.
We provide tools to create physical simulations of customizable optical systems, to generate and train neural network models, and to analyze experimental data.

If you use DeepTrack 2.0 in your project, please cite our DeepTrack 2.0 article:
```
Benjamin Midtvedt, Saga Helgadottir, Aykut Argun, Jesús Pineda, Daniel Midtvedt, Giovanni Volpe.
Benjamin Midtvedt, Saga Helgadottir, Aykut Argun, Jesús Pineda, Daniel Midtvedt, Giovanni Volpe.
"Quantitative Digital Microscopy with Deep Learning."
Applied Physics Reviews 8 (2021), 011310.
https://doi.org/10.1063/5.0034891
```

# Getting started

## Installation

DeepTrack 2.1 requires at least python 3.6.
DeepTrack 2.0 requires at least python 3.6

To install DeepTrack 2.1, open a terminal or command prompt and run:
To install DeepTrack 2.0, open a terminal or command prompt and run

pip install deeptrack

If you have a very recent version of python, you may need to install numpy _before_ DeepTrack. This is a known issue with scikit-image.
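
For example, if you run into that issue, a minimal workaround (assuming a standard pip environment) is to install numpy first and then DeepTrack:

    pip install numpy
    pip install deeptrack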

## Updating to 2.1 from 2.0

If you are already using DeepTrack 2.0 (pypi version 0.x.x), updating to DeepTrack 2.1 (pypi version 1.x.x) is painless. If you have followed deprecation warnings, no change to your code is needed. There are two breaking changes:

- The deprecated operator `+` to chain features has been removed. It is now only possible using the `>>` operator.
- The deprecated operator `**` to duplicate a feature has been removed. It is now only possible using the `^` operator.

If you notice any other changes in behavior, please report it to us in the issues tab.
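
As a quick illustration of the new operators, here is a minimal sketch of a 2.1-style pipeline. The specific features (`PointParticle`, `Fluorescence`, `Gaussian`) and all parameter values below are illustrative choices for this README, not code taken from the tutorials:

```python
import numpy as np
import deeptrack as dt

# Example scatterer and optics; all parameter values here are illustrative.
particle = dt.PointParticle(
    position=lambda: np.random.rand(2) * 64,
    position_unit="pixel",
)
optics = dt.Fluorescence(output_region=(0, 0, 64, 64))

# DeepTrack 2.1: duplicate a feature with ^ (replaces the removed ** operator).
five_particles = particle ^ 5

# DeepTrack 2.1: chain features with >> (replaces the removed + operator).
pipeline = optics(five_particles) >> dt.Gaussian(sigma=0.05)

# Resolve one simulated image from the pipeline.
image = pipeline.update().resolve()
```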

## Learning DeepTrack 2.1
## Learning DeepTrack 2.0

Everybody learns in different ways! Depending on your preferences, and what you want to do with DeepTrack, you may want to check out one or more of these resources.

### Fundamentals

First, we have a very general walkthrough of [basic](https://softmatterlab.github.io/DeepTrack-2.0/basics.html) and [advanced](https://softmatterlab.github.io/DeepTrack-2.0/advanced.html) topics. This is a 5-10 minute read that will get you well on your way to understanding the unique interactions available in DeepTrack.

Similarly, you may find the [get-started notebooks](examples/get-started) a rewarding way to start learning DeepTrack.

## Documentation

The detailed documentation of DeepTrack 2.1 is available at the following link: https://softmatterlab.github.io/DeepTrack-2.0/deeptrack.html

### DeepTrack 2.1 in action
### DeepTrack 2.0 in action

To see DeepTrack in action, we provide six well documented tutorial notebooks that create simulation pipelines and train models:

1. [deeptrack_introduction_tutorial](examples/tutorials/deeptrack_introduction_tutorial.ipynb) gives an overview of how to use DeepTrack 2.1.
2. [tracking_particle_cnn_tutorial](examples/tutorials/tracking_particle_cnn_tutorial.ipynb) demonstrates how to track a point particle with a convolutional neural network (CNN).
3. [tracking_multiple_particles_unet_tutorial](examples/tutorials/tracking_multiple_particles_unet_tutorial.ipynb) demonstrates how to track multiple particles using a U-net.
4. [characterizing_aberrations_tutorial](examples/tutorials/characterizing_aberrations_tutorial.ipynb) demonstrates how to add and characterize aberrations of an optical device.
5. [distinguishing_particles_in_brightfield_tutorial](examples/tutorials/distinguishing_particles_in_brightfield_tutorial.ipynb) demonstrates how to use a U-net to track and distinguish particles of different sizes in brightfield microscopy.
6. [analyzing_video_tutorial](examples/tutorials/analyzing_video_tutorial.ipynb) demonstrates how to create videos and how to train a neural network to analyze them.
1. [deeptrack_introduction_tutorial](tutorials/deeptrack_introduction_tutorial.ipynb) gives an overview of how to use DeepTrack 2.0.
2. [tracking_particle_cnn_tutorial](tutorials/tracking_particle_cnn_tutorial.ipynb) demonstrates how to track a point particle with a convolutional neural network (CNN).
3. [tracking_multiple_particles_unet_tutorial](tutorials/tracking_multiple_particles_unet_tutorial.ipynb) demonstrates how to track multiple particles using a U-net.
4. [characterizing_aberrations_tutorial](tutorials/characterizing_aberrations_tutorial.ipynb) demonstrates how to add and characterize aberrations of an optical device.
5. [distinguishing_particles_in_brightfield_tutorial](tutorials/distinguishing_particles_in_brightfield_tutorial.ipynb) demonstrates how to use a U-net to track and distinguish particles of different sizes in brightfield microscopy.
6. [analyzing_video_tutorial](tutorials/analyzing_video_tutorial.ipynb) demonstrates how to create videos and how to train a neural network to analyze them.

Additionally, we have seven more case studies which are less documented, but give additional insight into how to use DeepTrack with real datasets:

1. [MNIST](examples/paper-examples/1_MNIST.ipynb) classifies handwritten digits.
2. [single particle tracking](examples/paper-examples/2-single_particle_tracking.ipynb) tracks experimentally captured videos of a single particle. (Requires opencv-python compiled with ffmpeg to open and read a video.)
3. [single particle sizing](examples/paper-examples/3-particle_sizing.ipynb) extracts the radius and refractive index of particles.
4. [multi-particle tracking](examples/paper-examples/4-multi-molecule-tracking.ipynb) detects quantum dots in a low SNR image.
5. [3-dimensional tracking](examples/paper-examples/5-inline_holography_3d_tracking.ipynb) tracks particles in three dimensions.
6. [cell counting](examples/paper-examples/6-cell_counting.ipynb) counts the number of cells in fluorescence images.
7. [GAN image generation](examples/paper-examples/7-GAN_image_generation.ipynb) uses a GAN to create cell images from masks.

### Model-specific examples

We also have examples that are specific for certain models. This includes
- [*LodeSTAR*](examples/LodeSTAR) for label-free particle tracking.
- [*MAGIK*](examples/MAGIK) for graph-based particle linking and trace characterization.
1. [![](https://colab.research.google.com/assets/colab-badge.svg) MNIST](https://colab.research.google.com/drive/1dRehGzf9DNpz7Jo2dw4U6vSyE4STZgpF?usp=sharing) classifies handwritten digits.
2. [![](https://colab.research.google.com/assets/colab-badge.svg) single particle tracking](https://colab.research.google.com/drive/1rh46w8TuJDF0mnvLpo6dlWkLiLr7MmQ9?usp=sharing) tracks experimentally captured videos of a single particle. (Requires opencv-python compiled with ffmpeg to open and read a video.)
3. [![](https://colab.research.google.com/assets/colab-badge.svg) single particle sizing](https://colab.research.google.com/drive/1U12f3m3oLKCGp-BAERGrwjMhEZdkWvT5?usp=sharing) extracts the radius and refractive index of particles.
4. [![](https://colab.research.google.com/assets/colab-badge.svg) multi-particle tracking](https://colab.research.google.com/drive/1TpNZ6ytoDXSZvGDFAFWrSjNs4SXGZmBw?usp=sharing) detects quantum dots in a low SNR image.
5. [![](https://colab.research.google.com/assets/colab-badge.svg) 3-dimensional tracking](https://colab.research.google.com/drive/1QJXPxsVeDt1ZW1685D5VANsME69s3mqi?usp=sharing) tracks particles in three dimensions.
6. [![](https://colab.research.google.com/assets/colab-badge.svg) cell counting](https://colab.research.google.com/drive/1C2Gn1Ym8etycOYW9yfDB_WiKlEeyvLtp?usp=sharing) counts the number of cells in fluorescence images.
7. [![](https://colab.research.google.com/assets/colab-badge.svg) GAN image generation](https://colab.research.google.com/drive/1rfFbeE-qkg3PxHBEa_r7Q9wXq0vdueEC?usp=sharing) uses a GAN to create cell images from masks.

### Video Tutorials

Videos are currently being updated to match the current version of DeepTrack.
DeepTrack 2.0 introduction tutorial video: https://youtu.be/hyfaxF8q6VE
<a href="http://www.youtube.com/watch?feature=player_embedded&v=hyfaxF8q6VE
" target="_blank"><img src="https://img.youtube.com/vi/hyfaxF8q6VE/maxresdefault.jpg"
alt="Tutorial" width="384" height="216" border="10" /></a>

DeepTrack 2.0 recognizing handwritten digits tutorial video: https://youtu.be/QD9JUXyLJpc
<a href="http://www.youtube.com/watch?feature=player_embedded&v=QD9JUXyLJpc
" target="_blank"><img src="https://img.youtube.com/vi/QD9JUXyLJpc/maxresdefault.jpg"
alt="Tutorial" width="384" height="216" border="10" /></a>

DeepTrack 2.0 single particle tracking tutorial video: https://youtu.be/6Cntik6AfBI
<a href="http://www.youtube.com/watch?feature=player_embedded&v=6Cntik6AfBI
" target="_blank"><img src="https://img.youtube.com/vi/6Cntik6AfBI/maxresdefault.jpg"
alt="Tutorial" width="384" height="216" border="10" /></a>

DeepTrack 2.0 single-particle characterization tutorial video: https://youtu.be/ia2H1QO1cHg
<a href="http://www.youtube.com/watch?feature=player_embedded&v=ia2H1QO1cHg
" target="_blank"><img src="https://img.youtube.com/vi/ia2H1QO1cHg/maxresdefault.jpg"
alt="Tutorial" width="384" height="216" border="10" /></a>

DeepTrack 2.0 multiple particle tracking tutorial video: https://youtu.be/wFV2VqzpeZs
<a href="http://www.youtube.com/watch?feature=player_embedded&v=wFV2VqzpeZs
" target="_blank"><img src="https://img.youtube.com/vi/wFV2VqzpeZs/maxresdefault.jpg"
alt="Tutorial" width="384" height="216" border="10" /></a>

DeepTrack 2.0 multiple particle tracking in 3D tutorial video: https://youtu.be/fzD1QIEIJ04
<a href="http://www.youtube.com/watch?feature=player_embedded&v=fzD1QIEIJ04
" target="_blank"><img src="https://img.youtube.com/vi/fzD1QIEIJ04/mqdefault.jpg"
alt="Tutorial" width="384" height="216" border="10" /></a>

DeepTrack 2.0 cell counting tutorial video: https://youtu.be/C6hu_IYoWtI
<a href="https://www.youtube.com/watch?feature=player_embedded&v=C6hu_IYoWtI
" target="_blank"><img src="https://img.youtube.com/vi/C6hu_IYoWtI/mqdefault.jpg"
alt="Tutorial" width="384" height="216" border="10" /></a>

DeepTrack 2.0 GAN image generation tutorial video: https://youtu.be/8g44Yks7cis
<a href="https://www.youtube.com/watch?feature=player_embedded&v=8g44Yks7cis
" target="_blank"><img src="https://img.youtube.com/vi/8g44Yks7cis/mqdefault.jpg"
alt="Tutorial" width="384" height="216" border="10" /></a>

### In-depth dives

The examples folder contains notebooks that explain the different modules in more detail. These can be read in any order, but we provide a recommended order where more fundamental topics are introduced early.
This order is as follows:

1. [features_example](examples/module-examples/features_example.ipynb)
2. [properties_example](examples/module-examples/properties_example.ipynb)
3. [scatterers_example](examples/module-examples/scatterers_example.ipynb)
4. [optics_example](examples/module-examples/optics_example.ipynb)
5. [aberrations_example](examples/module-examples/aberrations_example.ipynb)
6. [noises_example](examples/module-examples/noises_example.ipynb)
7. [augmentations_example](examples/module-examples/augmentations_example.ipynb)
8. [image_example](examples/module-examples/image_example.ipynb)
9. [generators_example](examples/module-examples/generators_example.ipynb)
10. [models_example](examples/module-examples/models_example.ipynb)
11. [losses_example](examples/module-examples/losses_example.ipynb)
12. [utils_example](examples/module-examples/utils_example.ipynb)
13. [sequences_example](examples/module-examples/sequences_example.ipynb)
14. [math_example](examples/module-examples/math_example.ipynb)
1. [features_example](examples/features_example.ipynb)
2. [properties_example](examples/properties_example.ipynb)
3. [scatterers_example](examples/scatterers_example.ipynb)
4. [optics_example](examples/optics_example.ipynb)
5. [aberrations_example](examples/aberrations_example.ipynb)
6. [noises_example](examples/noises_example.ipynb)
7. [augmentations_example](examples/augmentations_example.ipynb)
8. [image_example](examples/image_example.ipynb)
9. [generators_example](examples/generators_example.ipynb)
10. [models_example](examples/models_example.ipynb)
11. [losses_example](examples/losses_example.ipynb)
12. [utils_example](examples/utils_example.ipynb)
13. [sequences_example](examples/sequences_example.ipynb)
14. [math_example](examples/math_example.ipynb)

## Graphical user interface

DeepTrack 2.0 provides a completely stand-alone [graphical user interface](https://github.com/softmatterlab/DeepTrack-2.0-app), which delivers all the power of DeepTrack without requiring programming knowledge.

## Cite us!
[![InterfaceDemo](https://i.imgur.com/lTy2vhz.gif)](https://i.imgur.com/lTy2vhz.gif)

## Documentation

If you use DeepTrack 2.1 in your project, please cite us here:
The detailed documentation of DeepTrack 2.0 is available at the following link: https://softmatterlab.github.io/DeepTrack-2.0/deeptrack.html

## Cite us!

If you use DeepTrack 2.0 in your project, please cite us here:
```
Benjamin Midtvedt, Saga Helgadottir, Aykut Argun, Jesús Pineda, Daniel Midtvedt, Giovanni Volpe.
Benjamin Midtvedt, Saga Helgadottir, Aykut Argun, Jesús Pineda, Daniel Midtvedt, Giovanni Volpe.
"Quantitative Digital Microscopy with Deep Learning."
Applied Physics Reviews 8 (2021), 011310.
https://doi.org/10.1063/5.0034891
```

See also:

<https://arxiv.org/abs/2202.06355>:
```
Jesús Pineda, Benjamin Midtvedt, Harshith Bachimanchi, Sergio Noé, Daniel Midtvedt, Giovanni Volpe, and Carlo Manzo
"Geometric deep learning reveals the spatiotemporal fingerprint of microscopic motion."
arXiv 2202.06355 (2022).
Saga Helgadottir, Aykut Argun, and Giovanni Volpe.
"Digital video microscopy enhanced by deep learning."
Optica 6.4 (2019): 506-513.
https://doi.org/10.1364/OPTICA.6.000506
```

<https://doi.org/10.1364/OPTICA.6.000506>:
```
Saga Helgadottir, Aykut Argun, and Giovanni Volpe.
"Digital video microscopy enhanced by deep learning."
Optica 6.4 (2019): 506-513.
```

<https://github.com/softmatterlab/DeepTrack.git>:
```
Saga Helgadottir, Aykut Argun, and Giovanni Volpe.
Saga Helgadottir, Aykut Argun, and Giovanni Volpe.
"DeepTrack." (2019)
https://github.com/softmatterlab/DeepTrack.git
```

## Funding

This work was supported by the ERC Starting Grant ComplexSwimmers (Grant No. 677511) and the ERC Starting Grant MAPEI (101001267).
This work was supported by the ERC Starting Grant ComplexSwimmers (Grant No. 677511).
7 changes: 7 additions & 0 deletions _src/source/augmentations.rst
@@ -76,3 +76,10 @@ PadToMultiplesOf
   :members:
   :exclude-members: get

PreLoad
^^^^^^^

.. autoclass:: deeptrack.augmentations.PreLoad
   :members:
   :exclude-members: get
