Support directories with millions of files. #95
Comments
Any more detail for this issue? In what kind of operation are the file attributes fetched?
When you do
Got it, I can take a try at this one.
If there are millions of files, we might run into trouble when the first
Yes. Right now we don't have a workaround; that could be the next challenge.
The second part is done by #128
What would you like to be added:
Currently, we fetch the attributes of all files in a single directory with a single batch request to Redis, which could be slow or fail, and block other requests.
We can split those into small batches, for example, 1000 per batch.
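The batching idea above can be sketched as follows. This is a minimal illustration, not the project's actual code: `batchedMGet` and its `mget` callback are hypothetical names standing in for a real Redis client call, and the batch size of 1000 matches the number suggested in this issue.

```go
package main

import "fmt"

// batchedMGet splits keys into fixed-size batches and fetches each batch
// with its own request, instead of issuing one huge MGET for the whole
// directory. The mget callback stands in for a real Redis MGET call.
func batchedMGet(keys []string, batchSize int, mget func([]string) ([]string, error)) ([]string, error) {
	vals := make([]string, 0, len(keys))
	for start := 0; start < len(keys); start += batchSize {
		end := start + batchSize
		if end > len(keys) {
			end = len(keys)
		}
		part, err := mget(keys[start:end])
		if err != nil {
			return nil, err
		}
		vals = append(vals, part...)
	}
	return vals, nil
}

func main() {
	keys := make([]string, 2500)
	for i := range keys {
		keys[i] = fmt.Sprintf("inode%d", i)
	}
	calls := 0
	vals, _ := batchedMGet(keys, 1000, func(batch []string) ([]string, error) {
		calls++
		return batch, nil // echo keys back as fake attribute values
	})
	fmt.Println(calls, len(vals)) // 2500 keys -> 3 batches
}
```

Each batch is an independent round trip, so other requests can be served between batches rather than waiting behind one massive command.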
Why is this needed:
The number of files could be in the millions, and we don't want people to be bitten by that.
Backlog
- MGET with small batches: Avoid calling mget with massive number of keys in Readdir #110
- HSCAN instead of HGETALL: List large directories as small batches #128
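The HSCAN-over-HGETALL idea boils down to pulling a directory's entries in pages through a cursor instead of materializing the whole hash in one reply. The toy sketch below mimics the HSCAN contract (pass a cursor in, get a page and the next cursor back, with cursor 0 meaning the scan is finished); it pages over an in-memory slice, whereas real Redis HSCAN uses an opaque cursor, may return entries in any order, and treats COUNT only as a hint.

```go
package main

import "fmt"

// scanPage mimics the HSCAN contract: given a cursor, return up to count
// entries plus the next cursor. A returned cursor of 0 means the scan
// is complete, matching Redis SCAN/HSCAN semantics.
func scanPage(names []string, cursor, count int) ([]string, int) {
	end := cursor + count
	if end >= len(names) {
		return names[cursor:], 0
	}
	return names[cursor:end], end
}

func main() {
	names := []string{"a.txt", "b.txt", "c.txt", "d.txt", "e.txt"}
	cursor := 0
	for {
		page, next := scanPage(names, cursor, 2)
		fmt.Println(page) // one small page per round trip
		if next == 0 {
			break
		}
		cursor = next
	}
}
```

Because each call returns only a bounded page, listing a directory with millions of entries never blocks the server for one giant HGETALL reply, and the listing can be streamed to the caller incrementally.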