delay updating hdf5 files in processing threads #22

ddale opened this Issue Apr 17, 2011 · 2 comments



ddale commented Apr 17, 2011

Let's use a workaround for the core dumps seen when processing data. Rather than attempting to update the HDF5 file after each point, let's send a Qt signal containing a dictionary of the new data. The main thread can receive that signal, update the maps it holds in memory, and write the file from the main event loop only after all processing is complete.
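A minimal sketch of the batching idea, using only the standard library: the actual implementation would use a Qt signal/slot connection instead of a `queue.Queue`, and the final write would go through h5py rather than returning a dict. All names here (`worker`, `collect_and_write`, the doubling "processing") are hypothetical.

```python
import queue
import threading

def worker(results_q, points):
    # Simulate per-point processing. Instead of writing to the HDF5
    # file here (which triggered the core dumps), send the new data
    # back to the main thread as a dictionary.
    for i, value in enumerate(points):
        results_q.put({f"point_{i}": value * 2})  # hypothetical processing
    results_q.put(None)  # sentinel: processing is complete

def collect_and_write(points):
    results_q = queue.Queue()
    t = threading.Thread(target=worker, args=(results_q, points))
    t.start()
    maps = {}  # in-memory copy of the maps
    while True:
        item = results_q.get()
        if item is None:
            break
        maps.update(item)  # update the maps in memory only
    t.join()
    # Only now, with all processing complete, would the main thread
    # write `maps` out to the HDF5 file (e.g. via h5py).
    return maps
```

In the Qt version, the worker would `emit` the dictionary and a slot on the main thread would do the `maps.update`; the queue stands in for that signal/slot handoff.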

ddale commented Nov 11, 2011

This may have been addressed by the Cython FastRLock we added to h5py, which does a better job of synchronizing the h5py library.

ddale commented Jan 3, 2012

commit dd3234e implemented a results proxy that updates maps in memory.

commit c4c0e8f implemented the FastRLock to provide efficient synchronization of the results proxy.

Core dumps were addressed by commit 00dd37957de2 in h5py, which will be available in h5py 2.1.0.

@ddale ddale closed this Jan 3, 2012