
Memory keeps increasing when a user requests data whose size is < 1 MB (the outbuf_overflow default) #111

Closed
yuzhougit opened this issue Sep 17, 2015 · 2 comments

Comments

@yuzhougit
Copy link
Contributor

My server program runs on Windows (32-bit) with Waitress + Pyramid.
When I tested an HTTP API's performance with Apache ab, with response data smaller than 1 MB, I found that memory kept increasing until a MemoryError was eventually thrown.

The ab command looks like this (-k enables HTTP keep-alive):

 ab -c 10 -n 100000 -k http://apiurl

The issue only happens in keep-alive mode when the data size is smaller than 1 MB.
After debugging, I found the cause:

  • In keep-alive mode, the HTTP channel is never closed in my test
  • When the data size is < 1 MB, the channel uses an in-memory BytesIO as its output buffer
  • In the _flush_some method of channel.py, the output buffer is never cleared after all of its data has been sent; see the code:
        while True:
            outbuf = self.outbufs[0]
            # use outbuf.__len__ rather than len(outbuf) FBO of not getting
            # OverflowError on Python 2
            outbuflen = outbuf.__len__()
            if outbuflen <= 0:
                # self.outbufs[-1] must always be a writable outbuf
                if len(self.outbufs) > 1:
                    toclose = self.outbufs.pop(0)
                    try:
                        toclose.close()
                    except:
                        self.logger.exception(
                            'Unexpected error when closing an outbuf')
                    continue # pragma: no cover (coverage bug, it is hit)
                else:
                    # issue here: when outbuflen <= 0, outbufs[0] should
                    # at least be cleared
                    dobreak = True

I added one line of code at the marked place, which works around the issue. Please check whether this is a reasonable fix:

                else:
                    outbuf.prune()
                    dobreak = True
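To illustrate why the extra prune() call matters, here is a minimal, self-contained sketch. This is not Waitress's actual OverflowableBuffer; the class and its methods are hypothetical stand-ins that only mimic the relevant behavior: sent bytes are tracked by an offset but stay inside the BytesIO until prune() rebuilds it.

```python
from io import BytesIO

class SketchOutputBuffer:
    """Hypothetical sketch (NOT Waitress's real OverflowableBuffer) showing
    why an un-pruned BytesIO grows across keep-alive responses."""

    def __init__(self):
        self.file = BytesIO()
        self.sent = 0  # bytes already flushed to the socket

    def append(self, data):
        self.file.seek(0, 2)   # seek to end, then append
        self.file.write(data)

    def skip(self, numbytes):
        # mark bytes as sent, but leave them inside the BytesIO
        self.sent += numbytes

    def __len__(self):
        self.file.seek(0, 2)
        return self.file.tell() - self.sent  # unsent bytes only

    def prune(self):
        # drop already-sent bytes by rebuilding the BytesIO
        self.file.seek(self.sent)
        unsent = self.file.read()
        self.file = BytesIO(unsent)
        self.sent = 0

buf = SketchOutputBuffer()
for _ in range(3):           # three keep-alive responses on one channel
    buf.append(b"x" * 1024)  # ~1 KB response body
    buf.skip(len(buf))       # pretend the socket consumed it all

buf.file.seek(0, 2)
print(buf.file.tell())       # 3072: every sent byte is still held in memory
buf.prune()
buf.file.seek(0, 2)
print(buf.file.tell())       # 0: prune() released the sent bytes
```

With long-lived keep-alive connections, the un-pruned buffer grows by the size of every response ever sent on that channel, which matches the steadily increasing memory seen under ab.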
@digitalresistor
Member

Create a PR for this. prune() here should not cause issues.

Only potential improvement I see here:

Instead of prune() in OverflowableBuffer we should add a .clear() that keeps the existing BytesIO or temporary-file storage in the backend and does not reset back to the string-based buffer.

Although that would penalize all other responses with a possibly slower buffer whenever a single response is larger than 1 MB.

Either way, adding the prune() is a good idea.
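The suggested .clear() alternative might look like the sketch below. The function name and shape are hypothetical (no such method exists in Waitress at this point); the idea is simply to empty the buffer in place while keeping the same backing object, instead of resetting to a fresh string-based buffer the way prune() does:

```python
from io import BytesIO

def clear_keep_backend(buf_file):
    """Hypothetical .clear(): empty the buffer but keep the same BytesIO
    (or temp-file) object, so one >1 MB response does not force later
    small responses back onto a slower storage path."""
    buf_file.seek(0)
    buf_file.truncate(0)
    return buf_file

f = BytesIO(b"already-sent response body")
same = clear_keep_backend(f)
assert same is f               # backing object is preserved
assert f.getvalue() == b""     # contents are released
```

The trade-off discussed above applies: keeping the overflowed backend avoids re-allocation churn, but every subsequent response on that channel then pays for the slower storage.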

@yuzhougit
Contributor, Author

@bertjwregeer Thanks for the confirmation. A PR has been created.
