[Feature] better work with large query responses #385
Comments
Example fix for the second problem: #386
Hi, I took a quick look at the code. That said, the buffers should be garbage collected, and I don't see where in the code a memory leak could occur. Just to be sure, did you check during your test that the garbage collector was actually triggered? NB: I'll take a look at your fix for problem 2 this week.
Hi, thank you for the response; fixing these bugs is really important for our team right now. With Redis, we think the main problem is that the garbage collector is not triggered immediately, so the heap grows much faster than it is cleaned up. We will run additional tests on it; possibly our profiling showed total memory allocated rather than peak memory usage. If your team is OK with the general idea of the fix for the second problem, I can send a proper PR with tests, etc.
Yes, we're OK with your second fix.
I'm closing this issue since you have a fix for 1 and your fix for 2 will soon be merged.
@kasimtj 1.26.0 is released and includes your fix.
Thanks a lot for the quick response!
Is your feature request related to a problem? Please describe.
We investigated the memory consumption of chproxy and found two problems:
1. Reading a large payload from Redis allocates 2 MiB buffers in a loop, and the garbage collector is unable to free the memory quickly enough.
2. If a proxied request fails or gets canceled, chproxy tries to extract the error reason, and a large tmp file with the partial response is read into memory.
Describe the solution you'd like
Describe alternatives you've considered
Additional context