Batch reading content files to prevent too many open files error #12079

Merged — 5 commits from feat/async-content-batching into master on Sep 25, 2023

Conversation

thecrypticace
Contributor

This should help prevent Node from keeping too many files open at once. The batching limit is still fairly high (up to 500 files at a time), and we probably have some room to reduce it, but we can look into that in the future if we end up needing to.

Now, this implementation isn't technically as efficient as it could be. An asynchronous queue could process all files while keeping 500 reads in flight at any given time until fewer than 500 files remain, rather than waiting for each batch to finish before starting the next. But that would probably be more complex than necessary, so this simple batching solution should suffice for now.

Fixes #12069

We shouldn’t need to do this for our Rust code because it uses Rayon’s default thread pool for parallelism. That pool uses roughly as many threads as there are CPU cores unless overridden, which is generally much, much lower than 500, and it can be explicitly overridden via an env var to work around potential issues with open file descriptors if anyone ever runs into them.
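For reference, Rayon reads the `RAYON_NUM_THREADS` environment variable to size its global pool; the CLI invocation below is just an illustrative example, not a documented workaround from this PR:

```shell
# Cap Rayon's global thread pool at 8 threads, e.g. to limit how many
# file descriptors the Rust scanner can hold open concurrently.
RAYON_NUM_THREADS=8 npx tailwindcss -i input.css -o output.css
```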
thecrypticace merged commit aaca7c4 into master on Sep 25, 2023
10 checks passed
thecrypticace deleted the feat/async-content-batching branch on September 25, 2023 at 16:18
thecrypticace added a commit that referenced this pull request Oct 23, 2023
…12079)

* Refactor

* Refactor

* Batch content file reads in Node into groups of 500

We shouldn’t need to do this for our Rust code because it uses Rayon’s default thread pool for parallelism. That pool uses roughly as many threads as there are CPU cores unless overridden, which is generally much, much lower than 500, and it can be explicitly overridden via an env var to work around potential issues with open file descriptors if anyone ever runs into them.

* Fix sequential/parallel flip

* Update changelog