Directory listings containing many large files can take a very long time to read, because the metadata for each file is fetched serially while the directory listing is being read.
This could be sped up by fetching the metadata in parallel where necessary.
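A minimal sketch of what parallel metadata fetching could look like in Go. The `fetchMeta` function and its return value are hypothetical stand-ins for the real per-file remote call (e.g. a HEAD request); the bounded worker pattern with a semaphore channel is the point:

```go
package main

import (
	"fmt"
	"sync"
)

// fetchMeta is a hypothetical stand-in for the per-file remote
// metadata call; here it just derives a size from the name.
func fetchMeta(name string) int64 {
	return int64(len(name))
}

// fetchAll reads metadata for all entries with at most `workers`
// concurrent fetches, instead of one serial fetch per entry.
func fetchAll(names []string, workers int) map[string]int64 {
	sizes := make(map[string]int64, len(names))
	var mu sync.Mutex
	var wg sync.WaitGroup
	sem := make(chan struct{}, workers) // bounds concurrency

	for _, name := range names {
		wg.Add(1)
		sem <- struct{}{} // acquire a worker slot
		go func(name string) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot
			size := fetchMeta(name)
			mu.Lock()
			sizes[name] = size
			mu.Unlock()
		}(name)
	}
	wg.Wait()
	return sizes
}

func main() {
	sizes := fetchAll([]string{"a.txt", "bb.txt", "ccc.txt"}, 4)
	fmt.Println(sizes["a.txt"], sizes["bb.txt"], sizes["ccc.txt"])
}
```

Errors from individual fetches would need collecting as well (e.g. via a shared error slice under the same mutex), which the sketch omits.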
Update: this also came up here, along with the idea of deferring the metadata read until Size() is called. That won't help ls etc., but it would parallelize the reads and defer them until they are used. Unfortunately the API signature of Size() doesn't allow an error return.
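The deferred approach could look something like the sketch below, assuming a lazy wrapper around the object (the type and `fetchSize` helper are hypothetical, not rclone's real `fs.Object` implementation). Because `Size()` can't return an error, the fetch error has to be swallowed and a sentinel value returned instead:

```go
package main

import (
	"fmt"
	"sync"
)

// lazyObject defers the metadata fetch until Size() is first called.
// The names here are hypothetical illustrations.
type lazyObject struct {
	name string
	once sync.Once
	size int64
	err  error
}

// fetchSize stands in for the remote metadata read.
func fetchSize(name string) (int64, error) {
	return int64(len(name)), nil
}

// Size matches the error-less signature described above: any fetch
// error cannot be returned, so it is recorded and -1 is returned as
// a sentinel for "unknown size".
func (o *lazyObject) Size() int64 {
	o.once.Do(func() {
		o.size, o.err = fetchSize(o.name)
		if o.err != nil {
			o.size = -1
		}
	})
	return o.size
}

func main() {
	o := &lazyObject{name: "big.iso"}
	fmt.Println(o.Size()) // first call triggers the fetch
	fmt.Println(o.Size()) // cached thereafter
}
```

Using `sync.Once` keeps repeated Size() calls cheap and makes concurrent first calls safe, which is what allows the reads to be parallelized across objects.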
See the forum for the original discussion.