Merge d902c98 into aaffce7
fedem-p committed Nov 4, 2022
2 parents aaffce7 + d902c98 commit b1c27f5
Showing 23 changed files with 694 additions and 44 deletions.
17 changes: 1 addition & 16 deletions .github/workflows/doc_deploy.yml
@@ -4,22 +4,7 @@ on:
push:
branches:
- master
pull_request:
types:
- assigned
- unassigned
- labeled
- unlabeled
- opened
- edited
- reopened
- synchronize
- ready_for_review
- locked
- unlocked
- review_requested
- review_request_
workflow_dispatch:
workflow_dispatch:

jobs:
deploy-book:
27 changes: 21 additions & 6 deletions docs/_toc.yml
@@ -3,17 +3,32 @@ root: index
parts:
- caption: Overview
chapters:
- file: timeline
- caption: Configuration
- file: overview/intro
- file: overview/structure
- file: overview/features
- file: overview/FAQ
- caption: User Guide
chapters:
- file: configuration/installation
- file: configuration/configuration
- caption: Development
- file: user_guide/installation
- file: user_guide/configuration
- file: user_guide/hardware_support
- file: user_guide/GUI
- file: user_guide/calibration
- file: user_guide/planar_mode
- file: user_guide/volume_mode
- caption: Developer Guide
chapters:
- file: development/code_organization
- file: developer_guide/code_architecture
- file: developer_guide/hardware_control
- file: developer_guide/multiprocessing
- file: developer_guide/modes
- file: developer_guide/scanning
- caption: Hardware
chapters:
- file: hardware/hardware
- caption: Api
chapters:
- file: api/index
- caption: References
chapters:
- file: overview/references
15 changes: 15 additions & 0 deletions docs/developer_guide/code_architecture.md
@@ -0,0 +1,15 @@
# Code architecture

Sashimi is structured in modules that communicate with each other via signals and use shared queues to broadcast data to different processes.
In the simplified schematic below, the `State` module is the core of the program: it controls and ties together the main functions.

It communicates with the GUI and updates values for the GUI to read, and it creates and oversees the processes that are responsible for directly controlling the hardware through custom interfaces.
Moreover, the State controls the _Global State_ variable, which defines the mode the program is in. The GUI uses it to change the interface and settings accordingly, and the different processes use it to control the hardware.

```{figure} ../images/sashimi_struct.png
---
height: 500px
name: sashimi-struct
---
Sashimi simplified code structure
```
68 changes: 68 additions & 0 deletions docs/developer_guide/hardware_control.md
@@ -0,0 +1,68 @@
# Hardware Control

One of the strengths of Sashimi is its smart use of interfaces for connecting to potentially any hardware component.
Whenever a component needs to be swapped, this can be done by simply writing a custom Python file that connects it to the Sashimi interface. The main hardware components needed are a light source (laser), a camera, a piezo, multiple galvos for directing the laser and creating a sheet of light, and, finally, a board to drive most of the triggers.
Each interface is defined as an abstract class, which ensures that when a new custom module inheriting from the interface is created, all the functions and properties necessary for the software to work are implemented.

## Light-Source Interface

The light source interface enforces that any custom light source module defines a method for setting the power of the laser, a method for closing the laser, and properties for reading the intensity of the laser and the status (ON, OFF).
At the moment, only two modules are present in the software:

- Mock light source, which mocks the behavior of a light source and is used mainly for testing purposes
- Cobolt light source, which takes care of opening and setting up the Cobolt laser.
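
The abstract-class pattern described above can be sketched as follows; the class and method names here (`LightSource`, `set_power`, and so on) are illustrative assumptions, not the actual Sashimi identifiers.

```python
from abc import ABC, abstractmethod


class LightSource(ABC):
    """Sketch of a light-source interface: subclasses must implement
    power control, shutdown, and read-only intensity/status."""

    @abstractmethod
    def set_power(self, power_mw: float) -> None: ...

    @abstractmethod
    def close(self) -> None: ...

    @property
    @abstractmethod
    def intensity(self) -> float: ...

    @property
    @abstractmethod
    def status(self) -> str: ...


class MockLightSource(LightSource):
    """Mock implementation, analogous to the testing module."""

    def __init__(self):
        self._power = 0.0
        self._on = False

    def set_power(self, power_mw: float) -> None:
        self._power = power_mw
        self._on = power_mw > 0

    def close(self) -> None:
        self._on = False

    @property
    def intensity(self) -> float:
        return self._power

    @property
    def status(self) -> str:
        return "ON" if self._on else "OFF"
```

Because the interface is abstract, instantiating a subclass that forgets one of these methods raises a `TypeError` immediately, which is exactly the guarantee the interfaces provide.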

## Camera Interface

The camera interface outlines the functions and properties needed for the control of the camera and for the adjustment of the most relevant settings.
The core methods are start/stop acquisition, shutting down the camera, and, most importantly, get_frames, which returns a list of images.
Using the camera interface it’s possible to change the following properties:

- Binning size
- Exposure time
- Trigger mode, which is expected to be external for Sashimi volumetric mode
- Frame Rate, which can only be read (to change this value you’ll need to change the exposure time)
- Sensor resolution, which computes the resolution based on the binning size and the maximum resolution (a setting defined in the configuration file)

For this interface there’s a mock module, which displays Gaussian-filtered noise images, and a module for the Hamamatsu Orca Flash 4.0 v3 camera.
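
A minimal sketch of such a mock camera is shown below; the names and defaults are hypothetical, and a simple box blur stands in for the Gaussian filter to keep the example dependency-free.

```python
import numpy as np


class MockCamera:
    """Illustrative mock camera that returns smoothed-noise frames."""

    def __init__(self, max_resolution=(2048, 2048), binning=2):
        self.max_resolution = max_resolution
        self.binning = binning
        self.exposure_ms = 10.0
        self._running = False

    @property
    def sensor_resolution(self):
        # Resolution is derived from the maximum resolution and binning
        return tuple(r // self.binning for r in self.max_resolution)

    def start_acquisition(self):
        self._running = True

    def stop_acquisition(self):
        self._running = False

    def get_frames(self, n=1):
        """Return a list of smoothed random-noise images."""
        shape = self.sensor_resolution
        frames = []
        for _ in range(n):
            noise = np.random.rand(*shape)
            # crude smoothing standing in for a Gaussian filter
            smooth = (noise + np.roll(noise, 1, 0) + np.roll(noise, 1, 1)) / 3
            frames.append(smooth)
        return frames
```

Note how changing `binning` automatically changes `sensor_resolution`, mirroring the behavior described for the real interface.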

## External-Trigger Interface

The external trigger interface allows synchronizing Sashimi with behavioral software for stimulus presentation. To achieve this it uses the Python bindings of ZeroMQ, a high-performance asynchronous messaging library.
The interface takes care of establishing the connection to the other software and returning the total duration of the experiment protocol.
The duration is then used by the external communication process to update the acquisition duration of the experiment inside Sashimi.
There’s a module that provides a built-in connection between Sashimi and [Stytra](https://www.portugueslab.com/stytra/index.html), an open-source software package designed to cover all the general requirements of larval zebrafish behavioral experiments. Once an experiment protocol is ready and both programs are set up correctly, Stytra stands by and waits for a message from the acquisition software (Sashimi) to start the experiment. This message is sent automatically once the acquisition start is triggered in the Sashimi GUI.
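
The handshake can be sketched with pyzmq's request/reply pattern. The message format and the `serve_duration` stand-in below are assumptions for illustration only, not Stytra's actual protocol.

```python
import threading

import zmq


def serve_duration(socket, duration_s):
    """Stand-in for the behavioral software: block until the start
    trigger arrives, then reply with the protocol duration in seconds."""
    socket.recv_json()                       # wait for the start message
    socket.send_json({"duration": duration_s})


def trigger_and_get_duration(context, address):
    """Acquisition side: send the start trigger, read back the duration."""
    socket = context.socket(zmq.REQ)
    socket.connect(address)
    socket.send_json({"trigger": "start"})   # sent when acquisition starts
    reply = socket.recv_json()
    socket.close()
    return reply["duration"]
```

The REQ/REP pair enforces the strict "trigger, then reply" ordering the text describes: the behavioral side cannot answer before the trigger arrives.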

## Scanning Interface

The scanning interface is the most complicated interface, since it needs to handle the NI board, which in turn controls the scanning hardware.
This interface outlines three simple methods to write and read samples from the board, as well as to initialize the board and start the relevant tasks.
There are multiple properties that control different functionalities:

- z_piezo, reads and writes values to move the piezo vertically.
- z_frontal, reads and writes values to move the frontal laser vertically.
- xy_frontal, reads and writes values to move the frontal laser horizontally.
- z_lateral, reads and writes values to move the lateral laser vertically.
- xy_lateral, reads and writes values to move the lateral laser horizontally.
- camera_trigger, triggers the acquisition of one frame.

The implementation of the scanning interface connects to the NI board and initializes three analog streams:

- xy_writer, which combines the frontal and lateral galvos moving the laser horizontally and outputs a waveform.
- z_reader, which reads the piezo position.
- z_writer, which combines the frontal and lateral galvos moving the laser vertically, the piezo, and the camera trigger. For each of these, the output varies depending on the mode the software is in.

The config file also provides a rescaling factor that can be applied to the piezo values.

Another major part of the interface is the implementation of different scanning loops to continuously move the laser to form a sheet of light and move it in z synchronously with the piezo in order to keep the focus.
There is a main class called ScanLoop which continuously checks whether the settings have changed, fills the input arrays with the appropriate waveform, writes this array on the NI board (through the scanning interface), reads the values from the NI board, and keeps a count of the current written and read samples.
Two classes inherit from this main class:

- PlanarScanLoop
- VolumetricScanLoop

The main difference between the two is the way they fill the arrays responsible for controlling the vertical movement of the piezo and galvos.
The planar loop has two possible modes, one of which is used for calibration purposes and is completely manual. In this mode the piezo is moved independently of the lateral and frontal vertical galvos, which allows proper calibration of the focus plane for each specimen placed in the microscope.
The other mode is synchronized and uses the linear function computed by the calibration to derive the appropriate value for each galvo based on the piezo position.
The volumetric loop instead writes a sawtooth waveform to the piezo, then reads the piezo position and computes the appropriate value to set the vertical galvos to.
Given the desired frame rate, it also generates an array of impulses for the camera trigger, where the initial or final frames can be skipped depending on the waveform of the piezo. For ease of use the waveform is shown in the GUI, so that the user can decide how many frames to skip depending on the chosen settings.
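
The sawtooth and trigger generation can be sketched as follows; the function names, flyback fraction, and sample counts are illustrative assumptions, not the actual Sashimi implementation.

```python
import numpy as np


def piezo_sawtooth(n_samples, z_min, z_max, flyback_fraction=0.1):
    """One volume period: a linear ramp up followed by a fast flyback."""
    n_ramp = int(n_samples * (1 - flyback_fraction))
    ramp = np.linspace(z_min, z_max, n_ramp)
    flyback = np.linspace(z_max, z_min, n_samples - n_ramp)
    return np.concatenate([ramp, flyback])


def camera_triggers(n_samples, n_planes, skip_start=0, skip_end=0):
    """Evenly spaced trigger impulses over one period, with the option
    to skip planes at the start or end (e.g. during the flyback)."""
    triggers = np.zeros(n_samples, dtype=int)
    idx = np.linspace(0, n_samples - 1, n_planes, dtype=int)
    kept = idx[skip_start:n_planes - skip_end]
    triggers[kept] = 1
    return triggers
```

Skipped planes simply become missing impulses, which is why dropping the frames that fall on the flyback avoids the artifacts discussed in the modes documentation.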
53 changes: 53 additions & 0 deletions docs/developer_guide/modes.md
@@ -0,0 +1,53 @@
# Modes Panel

## Calibration Mode

The calibration mode is divided into two sections. In the first, at the top, three sliders allow the user to manually set the vertical frontal galvo, the vertical lateral galvo, and the piezo, and below them two buttons add or remove a calibration point.
At the bottom, the second section activates the noise subtraction function.
The calibration routine works as follows:

- First, cover one laser beam (either the frontal or the lateral one)
- Move the piezo to a position
- Adjust the uncovered laser beam with the corresponding vertical galvo slider until the image in the viewer is sharp enough
- Cover the other laser and adjust the newly uncovered beam with its vertical galvo slider until the image is sharp enough
- Add the calibration point with the button
- Repeat for multiple piezo positions

When choosing the piezo positions, two things are important: first, the positions used for the calibration should span the vertical scanning range that will be set in volumetric mode; second, the more calibration points there are, the more accurate the calibration will be.
The software fits a linear function to the points, so more points yield a more accurate estimate.
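
The linear fit can be sketched with NumPy; the function names and the form galvo = a * piezo + b are illustrative assumptions.

```python
import numpy as np


def fit_calibration(piezo_positions, galvo_values):
    """Fit galvo = a * piezo + b through the collected calibration points."""
    a, b = np.polyfit(piezo_positions, galvo_values, deg=1)
    return a, b


def galvo_for_piezo(piezo, a, b):
    """Galvo setting predicted by the fit for a given piezo position."""
    return a * piezo + b
```

With more calibration points, `np.polyfit` averages out the manual focusing error of each point, which is why adding points improves the estimate.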
The second section of the calibration mode allows the user to activate a noise subtraction filter. Once it’s activated, a pop-up asks the user to turn off all the lights, and the software takes n images (see fig) to compute the average sensor noise.
This noise image is then saved and subtracted from all subsequently acquired frames.

## Planar Mode

```{warning}
This mode is still under construction and doesn't fully work yet!
```

The planar mode is quite simple and has only two modifiable inputs:

- A slider to adjust the piezo position (the galvos are moved by an amount computed from the calibration points)
- A frequency selector, which sets the frequency at which images are acquired.

## Volumetric Mode

The volumetric mode is the core function of the software, since it allows for fast volume acquisition.
It can be divided into two sections: the first allows for the input of various settings, while the second displays the waveform of the piezo vs. the lateral galvo, with the camera triggers overlaid on top.
The settings that can be input are the following:

- A scanning range from ventral to rostral piezo position
- A volume rate per second
- The number of planes to acquire for each volume
- The number of planes to skip at the beginning of each volume
- The number of planes to skip at the end of each volume
- A button to select whether to pause the live view after the experiment

To understand why it may be necessary to drop some planes at the beginning and end of the volume acquisition, it is important to understand how the volumetric scanner works.
During volumetric scanning the piezo keeps moving, following a waveform (see fig), and the galvos follow it based on the calibration. The piezo does not stop during the acquisition of each frame; it keeps moving along the waveform. On the constant incline, except for long exposures, the effects of the movement are outweighed by the increased performance. However, in some sections of the waveform this generates noise and unwanted artifacts, which can be avoided by dropping any frame that does not line up with the linear part of the piezo waveform.
This can easily be seen in the second section of the volumetric widget, where the waveform and camera frames are plotted. The camera impulses are also stretched to match the length of the exposure time, which allows the user to intuitively see whether frames may overlap when the volume rate and exposure time are not matched correctly (see fig).

```{attention}
If you experience de-focusing when moving from planar mode (or calibration) to volumetric mode, it may be due to an imprecise calibration of the piezo inside the configuration file.
To check, make sure that the value written to the piezo is the actual value the piezo reads, using an oscilloscope connected to the NI board.
```
53 changes: 53 additions & 0 deletions docs/developer_guide/multiprocessing.md
@@ -0,0 +1,53 @@
# Multiprocessing

Multiprocessing allows programs to run multiple sets of instructions on separate cores at the same time. Correctly implemented programs run much faster and take full advantage of multi-core CPUs. Python has a built-in library that offers multiprocessing tools and functionalities, such as spawning and synchronizing separate processes and sharing information between them.
In microscopy it is crucial that events happen fast and synchronously with one another, especially in light-sheet microscopy, where the piezo, vertical galvos, and camera trigger must be synchronized in order to deliver a focused image.
Multiprocessing ensures that multiple hardware components and functionalities can work simultaneously and, even more importantly, can redistribute priority to make sure that the most important tasks are executed in the correct time frame, while other, less time-sensitive tasks can be processed less promptly.
As an example, in a light-sheet microscope it is of the utmost importance that galvos, camera triggers, and piezo are synchronized, while the process that saves the data to memory can work asynchronously from the other processes. Most processes need to keep up with the whole program in order to avoid stalls and delays, but given enough speed and buffering some functionalities don’t need to be as precise and synchronized as others. This also allows the computer to allocate resources dynamically to fulfill the tasks at hand.

## Logging

The logging process is a simple class which implements a concurrent logger. The logger has built-in functions to log events, queues, and any particular message to a custom file.
Every other process inherits from this class and therefore has a built-in logger, which makes logging events and messages easy and organized.
To automatically log events there is another class, LoggedEvent, which accepts a range of internally defined events and a logger, and returns an Event (from the multiprocessing library) that extends each functionality of the event class with a built-in logger.

1. The main process creates a LoggedEvent
2. The LoggedEvent is passed to one of the processes
3. The process assigns its logger to the LoggedEvent
4. Now, every time the event is set, cleared, or pinged, it is automatically logged in the process logging file
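
The steps above can be sketched as a thin wrapper around a multiprocessing Event; the method names (`attach_logger` in particular) are assumptions for illustration, not the real LoggedEvent API.

```python
import logging
from multiprocessing import Event


class LoggedEvent:
    """Sketch: wraps a multiprocessing Event so that set/clear calls
    are recorded by whatever logger the owning process attaches."""

    def __init__(self, name):
        self.name = name
        self._event = Event()
        self.logger = None

    def attach_logger(self, logger):
        # step 3: the process assigns its own logger to the event
        self.logger = logger

    def _log(self, action):
        if self.logger is not None:
            self.logger.info("%s: %s", self.name, action)

    def set(self):
        self._log("set")      # step 4: state changes are logged
        self._event.set()

    def clear(self):
        self._log("clear")
        self._event.clear()

    def is_set(self):
        return self._event.is_set()
```

Because the wrapper keeps the Event's normal interface, existing code can use it without changes while gaining per-process log entries.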

## Camera

The camera process handles all the camera related functionality, sets the camera parameters, mode and trigger.
It computes and checks the framerate and runs the camera in a mode-dependent loop.
If the current program mode is paused, the loop waits for the mode to change while checking for updates to the camera parameters. If instead the mode is preview, the loop gets new frames from the camera, inserts them into a queue, then checks for changes in the camera parameters and updates them if needed.
Until the program is closed, the camera is kept in this constant loop between preview and paused mode. The last possible camera mode is used to abort the current preview, stop the camera, and set the paused mode.
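
One iteration of this mode-dependent loop might look like the sketch below; the `CameraMode` names and the `StubCamera` class are illustrative, not Sashimi's actual enums or camera objects.

```python
from enum import Enum
from queue import Queue


class CameraMode(Enum):
    PAUSED = 0
    PREVIEW = 1
    ABORT = 2


class StubCamera:
    """Minimal stand-in camera for the sketch."""

    def __init__(self):
        self.running = True

    def get_frames(self):
        return ["frame"]

    def stop_acquisition(self):
        self.running = False


def camera_loop_step(mode, camera, frame_queue):
    """One iteration of the loop; returns the mode for the next pass."""
    if mode is CameraMode.PREVIEW:
        # preview: push new frames onto the shared queue
        for frame in camera.get_frames():
            frame_queue.put(frame)
    elif mode is CameraMode.ABORT:
        # abort: stop the camera and fall back to paused
        camera.stop_acquisition()
        return CameraMode.PAUSED
    # paused: only parameter updates would be checked (omitted here)
    return mode
```

Returning the next mode from each step makes the abort-then-pause transition explicit, matching the description above.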

## Scanning

The scanning process leverages the implementations of the scanning loops inside the interface; it mainly sets up the loop and updates the relevant settings.
It initializes the board, settings, and queues, which it then passes to a loop object. This loop object is an implementation of either the PlanarScanLoop or the VolumetricScanLoop, depending on the mode the program is in.

## External Communication

The external communication process uses the connection made by the external trigger interface to keep the settings updated and to check the trigger conditions.
Once these conditions are met, it sends a trigger and receives the duration of the experiment.
The duration is then inserted into a queue, where it is read by the main process and used to compute the end signal of the acquisition.

## Dispatching & Saving

There are two more processes that take care of assembling the volumes and saving them to disk.
The dispatcher process runs a loop in which it gets the newest settings and a frame from the camera process queue.
This frame is then optionally filtered to remove the sensor background noise (this can be activated in the volumetric mode widget) and stacked with others until it completes a volume.
The volume is then fed to two queues: one for saving the volume and one for the preview displayed by the viewer.
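
The stacking logic can be sketched as follows, assuming NumPy frames; the class and method names are hypothetical.

```python
import numpy as np


class VolumeDispatcher:
    """Sketch: collects frames into volumes of n_planes, optionally
    subtracting a stored noise image first."""

    def __init__(self, n_planes, noise=None):
        self.n_planes = n_planes
        self.noise = noise
        self._buffer = []

    def feed(self, frame):
        """Add one frame; return a completed volume or None."""
        if self.noise is not None:
            frame = np.clip(frame.astype(float) - self.noise, 0, None)
        self._buffer.append(frame)
        if len(self._buffer) == self.n_planes:
            volume = np.stack(self._buffer)
            self._buffer = []
            # in the real process this would go on both output queues
            return volume
        return None
```

Returning `None` until a volume completes lets the surrounding loop keep pulling frames without any extra bookkeeping.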
The saving process is a bit more complex, since it also holds the saving parameters and the saving status (which is important to keep track of the current chunk being saved).
The saving loop executes the following actions:

1. Initializes the saving folder
2. Reads a volume from the dispatcher queue
3. Calculates the optimal size for a file to be saved in chunks, based on the size of the data and the available RAM
4. Stores n volumes until it reaches the optimal size
5. Saves the chunk in an .h5 file
6. Once it has finished saving, writes a .json file with all the metadata
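
The chunk-size arithmetic in step 3 can be sketched as below; this is a simplified assumption of the actual heuristic, not Sashimi's implementation.

```python
def volumes_per_chunk(volume_shape, dtype_bytes, max_chunk_bytes):
    """How many volumes fit in one saved chunk, given a memory budget.

    volume_shape: (planes, height, width) of one volume
    dtype_bytes: bytes per pixel (e.g. 2 for uint16)
    max_chunk_bytes: largest chunk the process is willing to buffer
    """
    volume_bytes = dtype_bytes
    for dim in volume_shape:
        volume_bytes *= dim
    # always save at least one volume per chunk
    return max(1, max_chunk_bytes // volume_bytes)
```

For example, 30-plane volumes of 1024x1024 uint16 pixels occupy about 60 MiB each, so a 1 GiB budget holds 17 of them per chunk.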

This dynamic approach to the saving process ensures that the program doesn’t get overloaded while trying to save the acquired data.
15 changes: 15 additions & 0 deletions docs/developer_guide/scanning.md
@@ -0,0 +1,15 @@
# Scanning

````{warning}
UNDER CONSTRUCTION
```
___
/======/
____ // \___
| \\ //
|_______|__|_//
_L_____________\o
__(CCCCCCCCCCCCCC)____________
```
````
8 changes: 0 additions & 8 deletions docs/development/code_organization.md

This file was deleted.
