Patch: workaround for libcurl CURL_MAX_WRITE_SIZE #101

elephantum opened this Issue Jun 19, 2010 · 4 comments



The problem is the following: although libcurl's documentation says there is no need to call .perform() again after E_OK is returned, the truth is more subtle. E_OK is returned in two different cases: 1) there is really nothing left to do, or 2) CURL_MAX_WRITE_SIZE bytes of data were passed to the write_function. There is no way to check whether more data is available right now; even epoll'ing does not help in cases where the data has already been read into libcurl's internal buffers. This behavior significantly degrades the performance of fetching HTTP resources larger than 16 KB. It is especially visible when some synchronous work has to be done between event-loop iterations (50-100 ms XSL transformations, in my case).
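To illustrate the effect (this is a hypothetical simulation, not real libcurl: `FakeCurlBuffer`, `CURL_MAX_WRITE_SIZE`, and the chunking behavior are modeled after the description above), a single perform() call hands at most one write-callback chunk to the application, so a large buffered body needs many event-loop turns to drain:

```python
# Hypothetical model of libcurl's behavior as described above:
# each perform() call passes at most CURL_MAX_WRITE_SIZE bytes to
# the write callback and then returns OK, even if more data is
# already sitting in libcurl's internal buffer.
CURL_MAX_WRITE_SIZE = 16384  # libcurl's default write-chunk limit


class FakeCurlBuffer:
    """Stands in for libcurl's internal buffer of already-read data."""

    def __init__(self, body):
        self.pending = body

    def perform(self, write_function):
        chunk = self.pending[:CURL_MAX_WRITE_SIZE]
        self.pending = self.pending[CURL_MAX_WRITE_SIZE:]
        if chunk:
            write_function(chunk)
        # Returns OK even though self.pending may be non-empty.
        return "E_OK"


body = b"x" * (1024 * 1024)  # a 1 MB response
buf = FakeCurlBuffer(body)
received = []
calls = 0
while buf.pending:
    buf.perform(received.append)
    calls += 1
# 1 MB at 16 KB per call takes 64 perform() calls; if each call has
# to wait for the next event-loop iteration, latency adds up fast.
```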

The problem can be illustrated with this snippet of code: run it before and after the patch to see the difference in download speed.

I've developed a workaround for this problem, which I propose for merging into tornado:
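The proposed patch itself has since been deleted, but the idea can be sketched as follows (a minimal sketch, assuming the pattern described above; `FakeMulti` and `perform_until_idle` are illustrative names, not tornado or pycurl API): after perform() returns OK, call it again as long as the write callback received new data, and only then yield back to the IOLoop.

```python
# Sketch of the workaround: keep calling perform() while it keeps
# producing data, instead of stopping at the first OK return.
CURL_MAX_WRITE_SIZE = 16384


class FakeMulti:
    """Hypothetical stand-in for a curl multi handle with buffered data."""

    def __init__(self, body, sink):
        self.pending = body
        self.sink = sink

    def perform(self):
        chunk = self.pending[:CURL_MAX_WRITE_SIZE]
        self.pending = self.pending[CURL_MAX_WRITE_SIZE:]
        if chunk:
            self.sink.append(chunk)
        return "E_OK"  # OK even though self.pending may be non-empty


def perform_until_idle(multi, sink):
    """Re-run perform() until a call produces no new data for the sink."""
    calls = 0
    while True:
        chunks_before = len(sink)
        multi.perform()
        calls += 1
        if len(sink) == chunks_before:  # no progress: truly idle
            break
    return calls


sink = []
multi = FakeMulti(b"x" * (1024 * 1024), sink)
calls = perform_until_idle(multi, sink)
# The whole 1 MB body drains in a single event-loop turn:
# 64 productive calls plus one final idle check.
```

The "no progress" check is the key design choice: since libcurl gives no way to ask whether its internal buffer still holds data, the only reliable signal is whether the write callback was actually invoked.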


proof screenshot from production:


The same problem exists in the recently introduced AsyncHTTPClient2. Patch:


Some more data from production:

Distribution of the number of multi.perform() calls per event-loop iteration:
Distribution of the duration of a multi.perform() chain:

tornadoweb member

Closing since I'm not sure if this is still an issue and the proposed patch has been deleted. If it's still a problem feel free to reopen.

@bdarnell bdarnell closed this Apr 28, 2014