I'm trying to write a tensorstore array using dask on a single machine. I have a bunch of worker processes that each (a) call tensorstore.open(spec, write=True).result(), and then (b) assign a numpy array to a contiguous, chunk-aligned range of indices in the resulting tensorstore array. This does not appear to be running in parallel.
How can I make this work? Are there any best practices for writing to tensorstore arrays from multiple processes that I should be aware of?
I made a minimal reproducible example which confirms what you say -- multiprocess writes work fine. So I must be doing something wrong in the code that wasn't parallelizing properly.