File streaming and concurrent acquisition support #171

Open
jacopoabramo opened this issue Jun 13, 2023 · 0 comments

I did some refactoring of the RecordingManager and the DetectorManager.

What I did was:

  • use the napari threading framework to spawn multiple QRunnables that stream data from a detector to a file;
    • one thread is spawned for each detector currently flagged forAcquisition;
  • add streaming support for OME-TIFF and HDF5 files;
  • change the getChunk method in the DetectorManager class to return a tuple of two numpy arrays:
    • one holding the actual data chunk to be written;
    • one holding the frame IDs, used to check for possible frame losses during long acquisitions;
  • add a synchronization mechanism to update the GUI whenever a recording finishes, similar to what was already implemented.
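The per-detector streaming pattern above can be sketched roughly as follows. This is a minimal, self-contained illustration using plain `threading` and a mock detector (the real code uses napari's threading framework and the actual DetectorManager); `MockDetector`, `stream_detector`, and the `getChunk` signature shown here are simplified assumptions, not the real API. It shows the two-array return value of `getChunk` and how the frame IDs can be used to detect losses:

```python
import threading
import numpy as np

class MockDetector:
    """Stand-in for a DetectorManager (hypothetical, simplified interface)."""
    def __init__(self, name, frame_shape=(4, 4), chunk_size=5):
        self.name = name
        self.forAcquisition = True  # only flagged detectors get a thread
        self._next_id = 0
        self._chunk_size = chunk_size
        self._frame_shape = frame_shape

    def getChunk(self):
        # Returns (frames, frame_ids): the data chunk to be written,
        # and the IDs used to check for frame losses.
        n = self._chunk_size
        frames = np.zeros((n, *self._frame_shape), dtype=np.uint16)
        ids = np.arange(self._next_id, self._next_id + n)
        self._next_id += n
        return frames, ids

def stream_detector(detector, n_chunks, lost_frames):
    """Worker body: repeatedly poll getChunk and hand chunks to a writer."""
    all_ids = []
    for _ in range(n_chunks):
        frames, ids = detector.getChunk()
        # the file writer (e.g. OME-TIFF / HDF5 append) would go here
        all_ids.append(ids)
    ids = np.concatenate(all_ids)
    # any gap in consecutive frame IDs indicates lost frames
    lost_frames[detector.name] = int(np.sum(np.diff(ids) != 1))

detectors = [MockDetector("cam0"), MockDetector("cam1")]
lost = {}
threads = [threading.Thread(target=stream_detector, args=(d, 10, lost))
           for d in detectors if d.forAcquisition]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

After joining, `lost` maps each detector name to the number of ID gaps found; zero means no frames were dropped during the acquisition.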

Not everything is currently supported:

  • time lapses are not supported yet;
  • synchronization when using multiple cameras is a bit buggy;
  • zarr streaming is still missing;
  • data chunks are continuously collected by direct calls to the getChunk function of the DetectorManager class; this is inefficient for an instrument which (like mine) aims at very high acquisition speeds (exposure times of hundreds of microseconds, or even lower).

What I would like to have:

  • a thread-safe memory buffer which lets a camera write continuously to a dedicated RAM section while the streaming object continuously reads from it and writes it to file; if the streaming object is fast enough, there should be no data loss;
  • support for lossy/lossless compression algorithms, to reduce data storage requirements.
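The thread-safe buffer could look something like the following sketch: a fixed-capacity ring buffer over preallocated numpy memory, with a `Condition` guarding the read/write indices. The class name and sizes are hypothetical; a real implementation would likely block the producer differently (or drop frames) rather than stall the camera, and could pass each drained frame through a compressor before hitting disk:

```python
import threading
import numpy as np

class FrameRingBuffer:
    """Hypothetical thread-safe ring buffer: the camera thread writes frames
    into preallocated RAM while a streaming thread drains them to file.
    If the consumer keeps up with the producer, no frames are dropped."""
    def __init__(self, capacity, frame_shape, dtype=np.uint16):
        self._buf = np.empty((capacity, *frame_shape), dtype=dtype)
        self._capacity = capacity
        self._head = 0   # next slot to write
        self._tail = 0   # next slot to read
        self._count = 0  # frames currently buffered
        self._cond = threading.Condition()

    def put(self, frame):
        with self._cond:
            while self._count == self._capacity:  # buffer full: wait for reader
                self._cond.wait()
            self._buf[self._head] = frame
            self._head = (self._head + 1) % self._capacity
            self._count += 1
            self._cond.notify_all()

    def get(self):
        with self._cond:
            while self._count == 0:               # buffer empty: wait for writer
                self._cond.wait()
            frame = self._buf[self._tail].copy()
            self._tail = (self._tail + 1) % self._capacity
            self._count -= 1
            self._cond.notify_all()
            return frame

rb = FrameRingBuffer(capacity=8, frame_shape=(2, 2))
received = []

def producer():  # camera side: writes 100 numbered frames
    for i in range(100):
        rb.put(np.full((2, 2), i, dtype=np.uint16))

def consumer():  # streaming side: drains 100 frames in order
    for _ in range(100):
        received.append(int(rb.get()[0, 0]))

t_prod = threading.Thread(target=producer)
t_cons = threading.Thread(target=consumer)
t_prod.start(); t_cons.start()
t_prod.join(); t_cons.join()
```

With the consumer keeping pace, `received` comes out as the full ordered sequence of frame values, i.e. nothing is lost or reordered even though the buffer holds only 8 frames at a time.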

EDIT: fixed links
