The code is chunked, but each chunk is processed serially. A potentially major speed-up would be to add a parallel implementation here. Only reads are required; no data is written, but I'm not sure what happens if you try to open the same HDF5 file in read-only mode from multiple processes (see perhaps here for a start). My hope is that we can just pass the file name to different processes, and (if the file is chunked properly, so we never try to read the same chunk) use multiprocessing. I.e. something like:
zipped_args = zip([filename] * len(other_arg_1), other_arg_1, other_arg_2, ...)
for arg_set in zipped_args:
    # open the file in read-only mode and read the appropriate chunk
    # do the delay calculation
But I don't know if HDF5 will actually allow that to work.
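For what it's worth, h5py does allow multiple processes to hold independent read-only handles on the same file, as long as no process has it open for writing. A minimal sketch of the idea above, assuming h5py and a dataset named "data" (the filename, dataset name, and chunk bounds here are illustrative, not RAiDER's actual layout):

```python
import multiprocessing as mp

import h5py
import numpy as np


def process_chunk(filename, start, stop):
    # Each worker opens the file independently in read-only mode;
    # only the (filename, bounds) tuple is passed between processes.
    with h5py.File(filename, "r") as f:
        chunk = f["data"][start:stop]
    # stand-in for the actual delay calculation
    return chunk.sum()


if __name__ == "__main__":
    filename = "example.h5"
    # create a small test file so the sketch is self-contained
    with h5py.File(filename, "w") as f:
        f.create_dataset("data", data=np.arange(100.0), chunks=(10,))

    # disjoint chunk bounds, so no two workers read the same chunk
    bounds = [(i, i + 10) for i in range(0, 100, 10)]
    with mp.Pool(4) as pool:
        results = pool.starmap(
            process_chunk, [(filename, a, b) for a, b in bounds]
        )
    print(sum(results))  # 4950.0
```

The key design point is that the open file handle is never shared or pickled: each worker receives only the file name and its bounds, and opens its own handle, which sidesteps the usual problems with sharing HDF5 state across processes.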
Code is RAiDER/tools/RAiDER/delayFcns.py, line 123 at commit 8b397b1.