This software is distributed as accompanying software for the manuscript: P. Zhang, D. Ma, X. Cheng, A. P. Tsai, Y. Tang, H. Gao, L. Fang, C. Bi, G. E. Landreth, A. A. Chubykin, F. Huang, "Deep Learning-driven Adaptive Optics for Single-molecule Localization Microscopy" (2023) Nature Methods, Advanced Online Publication, doi: https://doi.org/10.1038/s41592-023-02029-0
Example Data:
- MirrorMode.mat: Measured mirror deformation modes used for DL-AO
- SystemPupil.mat: Measured pupil phase and magnitude under instrument optimum
Matlab scripts:
- helpers\PSF_MM.m: Simulation of PSFs with optical aberration
- helpers\OTFrescale.m: OTF rescaling of the simulated PSFs
- helpers\Net1Filter.m: Function for selecting detectable PSFs
- helpers\filterSub.m: Function for identifying pixels containing local maximum intensities
- main.m: Main script for generating training dataset
Example Data:
- data.mat: A small training dataset containing simulated PSFs.
- label.mat: Labels of the training dataset
- testdata.mat: A small test dataset
- testlabel.mat: Underlying true positions to compare estimation results
- Network1.pth: Trained model1 for DL-AO
- Network2.pth: Trained model2 for DL-AO
- Network3.pth: Trained model3 for DL-AO
Python scripts:
- main.py: Main script for training neural network
- model.py: Script for neural network architecture definition
- opts.py: Definitions of user adjustable variables
- test.py: Script for testing training result
Note:
- The example data include only 1,000 sub-regions; training for DL-AO used 6 million images. Additional training and validation datasets are available upon request.
- Network1-3 are three models with different training ranges used in DL-AO. See Supplementary Table 4 for the detailed training ranges.
Google Colaboratory (Colab) Notebook:
- DL-AOInferenceDemo.ipynb: Script for testing training results in Colab
Python scripts (Modified from scripts in section 1.2 for execution in Colab):
- model.py: Script for neural network architecture definition
- opts.py: Definitions of user adjustable variables
Note: ExampleData is required for running the Jupyter Notebook. Before running ‘DL-AOInferenceDemo.ipynb’, copy the following files into this folder:
- MirrorMode.mat (in ./Training Data Generation/ExampleData)
- testdata.mat (in ./Training and testing for DL-AO/ExampleData)
- testlabel.mat (in ./Training and testing for DL-AO/ExampleData)
- Network1.pth (in ./Training and testing for DL-AO/ExampleData)
- Network2.pth (in ./Training and testing for DL-AO/ExampleData)
- Network3.pth (in ./Training and testing for DL-AO/ExampleData)
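The copy step above can be scripted instead of done by hand. A minimal sketch: the REQUIRED mapping mirrors the file list above, but collect_files and the repo_root argument are hypothetical helpers for illustration, not part of the released code.

```python
import shutil
from pathlib import Path

# Source folders and files listed in the note above.
REQUIRED = {
    "Training Data Generation/ExampleData": ["MirrorMode.mat"],
    "Training and testing for DL-AO/ExampleData": [
        "testdata.mat", "testlabel.mat",
        "Network1.pth", "Network2.pth", "Network3.pth",
    ],
}

def collect_files(repo_root, dest_dir, required=REQUIRED):
    """Copy each required file into dest_dir and return the copied paths."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    copied = []
    for subdir, names in required.items():
        for name in names:
            src = Path(repo_root) / subdir / name
            copied.append(shutil.copy(src, dest / name))
    return copied
```

Point repo_root at the extracted repository and dest_dir at the ‘DL-AO_Inference_Demo_Colab’ folder before uploading it to Google Drive.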
Change the MATLAB current folder to the directory that contains main.m, then run main.m to generate the training dataset. (Uncomment line 90 of main.m when generating training data for Net1.)
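The generated .mat files can be sanity-checked from Python before training. A hedged sketch, assuming SciPy is available: the variable names inside the files are defined by the MATLAB script, so this illustrative helper simply reports whatever arrays it finds.

```python
import numpy as np
import scipy.io as sio

def summarize_mat(path):
    """Load a MATLAB .mat file and report each array's shape and dtype."""
    contents = sio.loadmat(path)
    for key, value in contents.items():
        if key.startswith("__"):  # skip MATLAB file-header entries
            continue
        arr = np.asarray(value)
        print(f"{key}: shape={arr.shape}, dtype={arr.dtype}")
    return contents

# Example (paths assume the dataset was written next to main.m):
# summarize_mat("./ExampleData/data.mat")
# summarize_mat("./ExampleData/label.mat")
```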
The code has been tested on the following system and packages:
Microsoft Windows 10 Education, MATLAB R2020a, DIPimage 2.8.1 (http://www.diplib.org/).
The code has been tested on the following system and packages:
Ubuntu 16.04 LTS, Python 3.6.9, PyTorch 0.4.0, CUDA 10.1, MATLAB R2015a
python main.py --datapath ./ExampleData --save ./Models
The expected output and runtime with the small example training dataset are shown below:
Because 'ExampleData' contains insufficient training data, the validation error is inf. More training data can be generated with the MATLAB code described in Section 2. An example output with 100 times more training data is shown below:
python test.py --datapath ./ExampleData/ --save ./result --checkptname ./ExampleData/Network2
The expected output and runtime with the small testing dataset are shown below:
Note:
- Each iteration will save a model, named by the iteration number, in the folder './Models/'
- The user can open errorplot.png in the folder './Models/' to observe the evolution of the training and validation errors.
- The user-adjustable variables for training will be saved in './Models/opt.txt'
- The training and validation errors for each iteration will be saved in './Models/error.log' (the 1st column is the training error and the 2nd column is the validation error)
- For PyTorch installation instructions and typical installation time, see: https://pytorch.org/get-started/locally/
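Based on the error.log format noted above (two whitespace-separated columns: training error, then validation error), training progress can also be inspected with a short standard-library script. read_error_log is an illustrative helper, not part of the released scripts.

```python
from pathlib import Path

def read_error_log(path):
    """Return (train_errors, val_errors) as parallel lists of floats,
    parsed from a two-column error.log as described in the note above."""
    train, val = [], []
    for line in Path(path).read_text().splitlines():
        fields = line.split()
        if len(fields) < 2:
            continue
        train.append(float(fields[0]))
        val.append(float(fields[1]))
    return train, val

# Tiny self-contained demo with synthetic values:
sample = Path("error.log.sample")
sample.write_text("0.52 0.61\n0.31 0.40\n0.22 0.35\n")
train, val = read_error_log(sample)
print("train:", train)  # -> train: [0.52, 0.31, 0.22]
print("val:  ", val)    # -> val:   [0.61, 0.4, 0.35]
sample.unlink()
```

In practice, point it at './Models/error.log' after a training run to find, for example, the iteration with the lowest validation error.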
This code has been tested in the Google Chrome browser on Google Colaboratory (Colab, https://colab.research.google.com/), which provides free access to essential computing hardware, such as the GPUs or TPUs used in DL-AO. In addition, all packages essential for testing DL-AO are pre-installed in Colab, so no installation is required on your local PC.
To run the DL-AO inference with trained network:
- Open https://drive.google.com/drive/my-drive in the Google Chrome browser and log in with your Gmail account.
- Drag the entire ‘DL-AO_Inference_Demo_Colab’ folder into the ‘My Drive’ folder in your Google Drive.
- Open the ‘DL-AO_Inference_Demo_Colab’ folder in your Google Drive; you should see the following files:
- Right-click the Jupyter Notebook named ‘DL-AOInferenceDemo.ipynb’, then select “Google Colaboratory” to open the Notebook in Colab
- To enable the GPU in your notebook, click "Edit", choose "Notebook settings", then select "GPU" as the hardware accelerator
- Run the code cells in ‘DL-AOInferenceDemo.ipynb’ by clicking the “run” button in the top-left corner of each cell. There are two code cells in the notebook: the first tests the DL-AO network, and the second displays the test result. An example output of these cells is shown below:
Note:
- We compare the estimation with the ground truth using wavefront shape, instead of mirror mode coefficients. This is because one wavefront shape can correspond to different mirror mode coefficients due to the coupling between experimental mirror deformation modes.
- A step-by-step explanation of the Colab code can be found in Section 3 of "Instruction for Testing Network in Web Browser.pdf"
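The first note can be illustrated numerically: when the measured mirror modes are coupled (one mode is a combination of others), distinct coefficient vectors can reproduce the same wavefront, so only the wavefront-shape error is meaningful. A hedged NumPy sketch, where the array names and shapes are assumptions for illustration:

```python
import numpy as np

def wavefront(modes, coeffs):
    """Wavefront as a linear combination of mirror deformation modes.
    modes: (n_pixels, n_modes) array; coeffs: (n_modes,) array."""
    return modes @ coeffs

def wavefront_rmse(modes, coeffs_est, coeffs_true):
    """RMS difference between the two wavefront shapes (not coefficients)."""
    diff = wavefront(modes, coeffs_est) - wavefront(modes, coeffs_true)
    return np.sqrt(np.mean(diff ** 2))

# Demo: make the third mode the sum of the first two (coupled modes).
rng = np.random.default_rng(0)
m0, m1 = rng.normal(size=(2, 64))
modes = np.column_stack([m0, m1, m0 + m1])
a = np.array([1.0, 1.0, 0.0])
b = np.array([0.0, 0.0, 1.0])
print(wavefront_rmse(modes, a, b))  # ~0: the wavefront shapes match
print(np.linalg.norm(a - b))        # yet the coefficients differ greatly
```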