Improve S3 asset read performance #17835
Conversation
LGTM. 🚀
Looks good. Ran it against autocannon with a couple of different scenarios to stress the resources on the box, and never hit any resource limits sending to or reading from S3.
* Create new s3 client for each read
* Temp disable ts while debugging
* Add concurrency test
* Add minio to other tests
* Reduce unavailable count
* Trigger blackbox tests whenever packages are updated
* Prevent minio-mc from exiting
* Decrease requests and increase test timeout
* Spam more requests over longer period
* Increase request timeout
* Run autocannon directly with larger image
* Fix tests
* Lock version
* My favorite file

---------

Co-authored-by: ian <licitdev@gmail.com>
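The "Add concurrency test" commit suggests a test that fires many parallel reads and checks they all complete. A self-contained sketch of that idea (names and the stand-in `readAsset` are hypothetical, not the PR's actual `index.test.ts`):

```typescript
// Hypothetical stand-in for an asset read; the real code would
// stream the object from S3 via the storage driver.
async function readAsset(key: string): Promise<string> {
  await new Promise((resolve) => setTimeout(resolve, 1));
  return `contents of ${key}`;
}

// Fire `requests` reads concurrently and count how many succeed.
// A stalled shared client would show up here as rejected/hung reads.
async function concurrencyTest(requests: number): Promise<number> {
  const results = await Promise.allSettled(
    Array.from({ length: requests }, (_, i) => readAsset(`asset-${i}`))
  );
  return results.filter((r) => r.status === "fulfilled").length;
}
```

A real test would assert that the fulfilled count equals the request count, with the timeout raised as in the commits above.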
When handling very large volumes of read requests, a shared S3 client can slow down or stall the process entirely. By using a fresh S3 client instance for each read request, the API can handle far more S3 traffic.
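A minimal sketch of the per-read-client pattern described above. The `ObjectStore` interface and `createClient` factory are hypothetical stand-ins (the real code would construct an `S3Client` from `@aws-sdk/client-s3`); they keep the example self-contained:

```typescript
// Hypothetical stand-in for an S3 client, so the sketch runs without
// the AWS SDK. The shape mirrors "get an object, then release sockets".
interface ObjectStore {
  getObject(key: string): Promise<string>;
  destroy(): void;
}

// Factory: build a fresh client per read instead of sharing a
// module-level singleton, so one slow stream cannot starve the
// shared connection pool.
function createClient(): ObjectStore {
  return {
    getObject: async (key: string) => `contents of ${key}`,
    destroy: () => {
      // Release the client's sockets once the read completes.
    },
  };
}

async function readAsset(key: string): Promise<string> {
  const client = createClient(); // fresh client: no shared pool to exhaust
  try {
    return await client.getObject(key);
  } finally {
    client.destroy(); // avoid leaking connections across reads
  }
}
```

The trade-off is extra per-request setup cost in exchange for isolation: a stalled or abandoned stream only ties up its own client's connections.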
Todo
index.test.ts