DeepCAD-RT: Real-time denoising of fluorescence time-lapse imaging using deep self-supervised learning
Among the challenges of fluorescence microscopy, the poor imaging signal-to-noise ratio (SNR) caused by a limited photon budget remains central. Fluorescence microscopy is inherently sensitive to detection noise because the photon flux in fluorescence imaging is far lower than that in photography. For almost all fluorescence imaging technologies, the inherent shot-noise limit sets the upper bound of imaging SNR and restricts imaging resolution, speed, and sensitivity. To capture enough fluorescence photons for a satisfactory SNR, researchers have to sacrifice imaging resolution, speed, and even sample health.
We present DeepCAD-RT, a versatile method for denoising fluorescence time-lapse images fast enough to be incorporated into the microscope acquisition system for real-time denoising. Our method is based on deep self-supervised learning: the original low-SNR data can be used directly to train the convolutional network, which is particularly advantageous in functional imaging, where the sample undergoes fast dynamics and capturing ground-truth data is hard or impossible. We demonstrate the method in extensive experiments, including calcium imaging in mice, zebrafish, and flies, cell migration observations, and the imaging of a new genetically encoded ATP sensor, covering both 2D single-plane imaging and 3D volumetric imaging. Qualitative and quantitative evaluations show that our method substantially enhances fluorescence time-lapse imaging data and permits high-sensitivity imaging of biological dynamics beyond the shot-noise limit.
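The key idea behind the self-supervised training can be illustrated in a few lines: because the underlying signal varies slowly between consecutive frames while the noise is independent, two temporally interleaved sub-stacks of the same noisy recording can serve as each other's training target. Below is a minimal NumPy sketch of this pairing idea; the function and variable names are illustrative and not part of the deepcad package.

```python
import numpy as np

def make_selfsupervised_pair(stack):
    """Split a noisy (T, H, W) stack into two temporally interleaved
    sub-stacks. The signal changes slowly between consecutive frames
    while the noise is independent, so one sub-stack can serve as the
    training target for the other; no clean ground truth is needed."""
    even, odd = stack[0::2], stack[1::2]   # frames 0,2,4,... and 1,3,5,...
    n = min(len(even), len(odd))
    return even[:n], odd[:n]               # (input, target) pair

# Example with a random stand-in for a noisy recording
noisy = np.random.poisson(5.0, size=(100, 64, 64)).astype(np.float32)
inputs, targets = make_selfsupervised_pair(noisy)
```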
For more details, please see the companion paper where the method first appeared: "Real-time denoising enables high-sensitivity fluorescence time-lapse imaging beyond the shot-noise limit", Nature Biotechnology (2022).
Directory tree:
DeepCAD-RT
|---DeepCAD_RT_pytorch # PyTorch implementation of DeepCAD-RT #
|---|---demo_train_pipeline.py
|---|---demo_test_pipeline.py
|---|---convert_pth_to_onnx.py
|---|---deepcad
|---|---|---__init__.py
|---|---|---utils.py
|---|---|---network.py
|---|---|---model_3DUnet.py
|---|---|---data_process.py
|---|---|---buildingblocks.py
|---|---|---test_collection.py
|---|---|---train_collection.py
|---|---|---movie_display.py
|---|---notebooks
|---|---|---demo_train_pipeline.ipynb
|---|---|---demo_test_pipeline.ipynb
|---|---|---DeepCAD_RT_demo_colab.ipynb
|---|---datasets
|---|---|---DataForPytorch # project_name #
|---|---|---|---data.tif
|---|---pth
|---|---|---ModelForPytorch
|---|---|---|---model.pth
|---|---|---|---model.yaml
|---|---onnx
|---|---|---ModelForPytorch
|---|---|---|---model.onnx
|---|---results
|---|---|--- # test results #
|---DeepCAD_RT_GUI # Matlab GUI of DeepCAD-RT #
- DeepCAD_RT_pytorch contains the PyTorch implementation of DeepCAD-RT (Python scripts, Jupyter notebooks, Colab notebook)
- DeepCAD_RT_GUI contains all C++ and Matlab files for the real-time implementation of DeepCAD-RT
2022/06/14: Enhanced data augmentation.
We replaced 12-fold data augmentation with 16-fold data augmentation for better performance.
Figure: denoising performance (SNR) as a function of training epochs on simulated calcium imaging data.
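Sixteen-fold augmentation multiplies each training patch by combining geometric transforms; one plausible decomposition is 4 in-plane rotations x 2 horizontal flips x 2 temporal directions = 16. The sketch below illustrates this decomposition; the exact augmentation set used by DeepCAD-RT is defined in the repository's data_process.py, so treat this as an illustration rather than the shipped implementation.

```python
import numpy as np

def augment_16fold(patch):
    """Yield 16 augmented copies of a (T, H, W) patch:
    4 in-plane rotations x 2 horizontal flips x 2 temporal directions."""
    for time_dir in (patch, patch[::-1]):                 # forward / reversed time
        for flipped in (time_dir, time_dir[:, :, ::-1]):  # with / without horizontal flip
            for k in range(4):                            # 0/90/180/270 degree rotations
                yield np.rot90(flipped, k, axes=(1, 2))
```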
2023/07/26: Simplified testing pipeline: only the model of the last training epoch will be tested by default.
2024/10/21: Optimized result data type: the result now keeps the same data type as the input.
- Ubuntu 16.04
- Python 3.9
- PyTorch 1.8.0
- NVIDIA GPU (GeForce RTX 3090) + CUDA (11.1)
- Create a virtual environment and install PyTorch. For the third command below, please select the PyTorch version that matches your CUDA version from https://pytorch.org/get-started/previous-versions/.
$ conda create -n deepcadrt python=3.9
$ conda activate deepcadrt
$ pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html
Note: the pip install command is required for the PyTorch installation.
- We have made an installable pip release of DeepCAD-RT [pypi]. You can install it by entering the following command:
$ pip install deepcad
Then download the source code:
$ git clone https://github.com/cabooster/DeepCAD-RT
$ cd DeepCAD-RT/DeepCAD_RT_pytorch/
To try out the Python code, please activate the deepcadrt environment first:
$ source activate deepcadrt
$ cd DeepCAD-RT/DeepCAD_RT_pytorch/
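To confirm that PyTorch was installed with working CUDA support (an optional sanity check; the exact version string depends on your installation), you can run:
$ python -c "import torch; print(torch.__version__, torch.cuda.is_available())"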
Example training
To train a DeepCAD-RT model, we recommend starting with the demo script demo_train_pipeline.py. One demo dataset will be downloaded to the DeepCAD_RT_pytorch/datasets folder automatically. You can also download other data from the companion webpage or use your own data by changing the training parameter datasets_path.
python demo_train_pipeline.py
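For reference, the demo script essentially builds a parameter dictionary and hands it to the training class of the deepcad package. The condensed sketch below assumes the training_class interface from deepcad/train_collection.py; the parameter values are illustrative and should be checked against demo_train_pipeline.py.

```python
from deepcad.train_collection import training_class

# Key training parameters (illustrative values; see demo_train_pipeline.py
# for the full list and the defaults shipped with the repository)
train_dict = {
    'patch_x': 150, 'patch_y': 150, 'patch_t': 150,  # 3D training patch size (pixels, frames)
    'overlap_factor': 0.25,               # overlap between adjacent patches
    'train_datasets_size': 6000,          # number of patches sampled for training
    'n_epochs': 10,                       # number of training epochs
    'GPU': '0',                           # index of the GPU used for training
    'datasets_path': 'datasets/DataForPytorch',  # folder containing the noisy .tif data
    'pth_dir': './pth',                   # output folder for trained models
}

tc = training_class(train_dict)  # create the training object from the parameters
tc.run()                         # start self-supervised training
```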
Example testing
To test the denoising performance with pre-trained models, you can run the demo script demo_test_pipeline.py. A demo dataset and its denoising model will be automatically downloaded to DeepCAD_RT_pytorch/datasets and DeepCAD_RT_pytorch/pth, respectively. You can change the dataset and the model by changing the parameters datasets_path and denoise_model.
python demo_test_pipeline.py
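As with training, the testing demo reduces to a parameter dictionary passed to the testing class. The sketch below assumes the testing_class interface from deepcad/test_collection.py; parameter names and values are taken from the demo and should be verified against demo_test_pipeline.py.

```python
from deepcad.test_collection import testing_class

# Key testing parameters (illustrative values; see demo_test_pipeline.py)
test_dict = {
    'patch_x': 150, 'patch_y': 150, 'patch_t': 150,  # 3D inference patch size
    'overlap_factor': 0.6,                # larger overlap reduces stitching artifacts
    'GPU': '0',                           # index of the GPU used for inference
    'datasets_path': 'datasets/DataForPytorch',  # folder with the noisy .tif stacks
    'denoise_model': 'ModelForPytorch',   # subfolder of pth/ holding the trained model
    'output_dir': './results',            # where the denoised stacks are written
}

tc = testing_class(test_dict)  # create the testing object from the parameters
tc.run()                       # denoise every stack found in datasets_path
```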
We provide simple and user-friendly Jupyter notebooks to run DeepCAD-RT. They are in the DeepCAD_RT_pytorch/notebooks folder. Before you launch the notebooks, please configure an environment following the instructions in Environment configuration. Then you can launch the notebooks with the following commands:
$ source activate deepcadrt
$ cd DeepCAD-RT/DeepCAD_RT_pytorch/notebooks
$ jupyter notebook
We also provide a cloud-based notebook implemented with Google Colab. You can run DeepCAD-RT directly in your browser using a cloud GPU without configuring the environment.
Note: The Colab notebook takes much longer to train and test because of the limited GPU performance offered by Colab.
To achieve real-time denoising, DeepCAD-RT was deployed on GPU using TensorRT (NVIDIA) for further acceleration and memory reduction. We also designed a sophisticated time schedule for multi-thread processing. On a two-photon microscope, real-time denoising has been achieved with our Matlab GUI of DeepCAD-RT (tested on a Windows desktop with an Intel i9 CPU and 128 GB RAM). Tutorials on installing and using the GUI have been moved to this page.
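TensorRT consumes networks in ONNX format, which is why the repository ships convert_pth_to_onnx.py and an onnx/ model folder. Conceptually, the conversion is a standard torch.onnx.export call; the sketch below assumes a Network_3D_Unet class in deepcad/network.py and illustrative constructor arguments, so consult convert_pth_to_onnx.py for the exact procedure.

```python
import torch
from deepcad.network import Network_3D_Unet  # assumed class name; see deepcad/network.py

# Rebuild the network and load the trained weights
# (constructor arguments are illustrative and must match the training run)
model = Network_3D_Unet(in_channels=1, out_channels=1, f_maps=16)
model.load_state_dict(torch.load('pth/ModelForPytorch/model.pth', map_location='cpu'))
model.eval()

# Export with a dummy input of the patch size used at inference time
dummy = torch.randn(1, 1, 150, 150, 150)  # (batch, channel, t, y, x)
torch.onnx.export(model, dummy, 'onnx/ModelForPytorch/model.onnx',
                  input_names=['input'], output_names=['output'],
                  opset_version=11)
```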
1. DeepCAD-RT massively improves the imaging SNR of neuronal population recordings in the zebrafish brain.
2. DeepCAD-RT reveals the ATP (adenosine 5′-triphosphate) dynamics of astrocytes in 3D after laser-induced brain injury.
More demo videos are presented on our website.
If you use this code, please cite the companion paper where the original method appeared:
- Xinyang Li, Yixin Li, Yiliang Zhou, et al. Real-time denoising enables high-sensitivity fluorescence time-lapse imaging beyond the shot-noise limit. Nat. Biotechnol. (2022). https://doi.org/10.1038/s41587-022-01450-8
- Xinyang Li, Guoxun Zhang, Jiamin Wu, et al. Reinforcing neuron extraction and spike inference in calcium imaging using deep self-supervised denoising. Nat. Methods 18, 1395–1400 (2021). https://doi.org/10.1038/s41592-021-01225-0
@article{li2022realtime,
title={Real-time denoising enables high-sensitivity fluorescence time-lapse imaging beyond the shot-noise limit},
author = {Li, Xinyang and Li, Yixin and Zhou, Yiliang and Wu, Jiamin and Zhao, Zhifeng and Fan, Jiaqi and Deng, Fei and Wu, Zhaofa and Xiao, Guihua and He, Jing and Zhang, Yuanlong and Zhang, Guoxun and Hu, Xiaowan and Chen, Xingye and Zhang, Yi and Qiao, Hui and Xie, Hao and Li, Yulong and Wang, Haoqian and Fang, Lu and Dai, Qionghai},
journal={Nature Biotechnology},
year={2022},
publisher={Nature Publishing Group}
}
@article{li2021reinforcing,
title={Reinforcing neuron extraction and spike inference in calcium imaging using deep self-supervised denoising},
author={Li, Xinyang and Zhang, Guoxun and Wu, Jiamin and Zhang, Yuanlong and Zhao, Zhifeng and Lin, Xing and Qiao, Hui and Xie, Hao and Wang, Haoqian and Fang, Lu and others},
journal={Nature Methods},
volume={18},
number={11},
pages={1395--1400},
year={2021},
publisher={Nature Publishing Group}
}