Memory keeps increasing when a user requests data whose size is < 1MB (the outbuf_overflow default) #111

Closed
yuzhougit opened this Issue Sep 17, 2015 · 2 comments


@yuzhougit
Contributor

My server program runs on Windows (32-bit) with Waitress + Pyramid.
While performance-testing an HTTP API with Apache ab, with response data smaller than 1MB, I found that memory keeps increasing until a MemoryError is eventually thrown.

The ab command looks like this:

 ab -c 10 -n 100000 -k http://apiurl

The issue only occurs in keep-alive mode when the data size is smaller than 1MB.
After debugging, I found the cause:

  • In keep-alive mode, the HTTP channel is never closed in my test
  • When the data size is < 1MB, the channel uses an in-memory BytesIO as the output buffer
  • In the _flush_some function of channel.py, the output buffer is never cleared after all its data has been sent; see the code
        while True:
            outbuf = self.outbufs[0]
            # use outbuf.__len__ rather than len(outbuf) FBO of not getting
            # OverflowError on Python 2
            outbuflen = outbuf.__len__()
            if outbuflen <= 0:
                # self.outbufs[-1] must always be a writable outbuf
                if len(self.outbufs) > 1:
                    toclose = self.outbufs.pop(0)
                    try:
                        toclose.close()
                    except:
                        self.logger.exception(
                            'Unexpected error when closing an outbuf')
                    continue # pragma: no cover (coverage bug, it is hit)
                else:
                    # issue here: when outbuflen <= 0 and only one outbuf
                    # remains, outbufs[0] should at least be cleared
                    dobreak = True

I added one line of code at the marked place, which works around the issue. Please check whether it is a reasonable fix:

                else:
                    outbuf.prune()
                    dobreak = True
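The effect of the missing prune can be sketched with a toy buffer (the names below are illustrative, not Waitress's actual OverflowableBuffer): if sent bytes are only tracked with an advancing offset, the backing BytesIO grows across keep-alive responses even though every byte has already been sent, while a prune() that copies the unsent tail into a fresh BytesIO keeps the backing storage bounded.

```python
from io import BytesIO

class ToyOutbuf:
    """Illustrative append/skip buffer backed by BytesIO (not Waitress code)."""

    def __init__(self):
        self.io = BytesIO()
        self.offset = 0          # bytes already sent

    def append(self, data):
        self.io.seek(0, 2)       # append at the end
        self.io.write(data)

    def __len__(self):
        # unsent bytes = total bytes written - bytes already sent
        self.io.seek(0, 2)
        return self.io.tell() - self.offset

    def skip(self, n):
        # mark n bytes as sent; the backing BytesIO still holds them
        self.offset += n

    def prune(self):
        # copy the unsent tail into a fresh BytesIO, releasing sent bytes
        self.io.seek(self.offset)
        tail = self.io.read()
        self.io = BytesIO(tail)
        self.offset = 0

    def backing_size(self):
        self.io.seek(0, 2)
        return self.io.tell()

buf = ToyOutbuf()
for _ in range(1000):            # 1000 keep-alive responses on one channel
    buf.append(b"x" * 1024)      # 1 KiB response body
    buf.skip(1024)               # fully sent

print(len(buf))                  # 0: nothing left to send...
print(buf.backing_size())        # 1024000: ...but every byte ever written is retained
buf.prune()
print(buf.backing_size())        # 0: pruning releases the sent bytes
```

This mirrors the report above: with keep-alive the channel (and its outbuf) survives across requests, so without the prune() call the backing storage can only grow.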
@bertjwregeer
Member

Create a PR for this. prune() here should not cause issues.

The only potential improvement I see here:

Instead of prune(), we could add a .clear() to OverflowableBuffer that keeps the existing BytesIO or temporary-file storage as the backend and does not reset back to the string-based buffer.

Although that would penalize all subsequent responses with possibly slower buffers just because one response was larger than 1MB.

Either way, adding the prune() is a good idea.
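The .clear() idea could look roughly like this (a hypothetical sketch, not Waitress's actual API): rather than prune() rebuilding the buffer, clear() would empty the overflowed backend in place, so the same BytesIO (or tempfile) object is reused for the channel's next response.

```python
from io import BytesIO

class OverflowSketch:
    """Hypothetical sketch of the proposed clear(); not the real OverflowableBuffer."""

    def __init__(self):
        self.file = BytesIO()    # stand-in for the overflowed backend

    def clear(self):
        # empty the backend in place, keeping the same object alive
        self.file.seek(0)
        self.file.truncate()

buf = OverflowSketch()
backend_before = buf.file
buf.file.write(b"response body")
buf.clear()
assert buf.file is backend_before    # same backend object retained
assert buf.file.getvalue() == b""    # but now empty
```

The trade-off is exactly the one noted above: keeping the overflowed backend avoids re-allocating it on the next large response, at the cost of every later small response on that channel going through the slower storage.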

@yuzhougit
Contributor

@bertjwregeer Thanks for your confirmation. A PR has been created.
