Keep-alive broken in 1.10.3 with chunked transfer-encoding #598

Closed
lmikkelsen opened this issue Apr 24, 2015 · 18 comments · Fixed by #599
Comments

@lmikkelsen

Per kennethreitz/requests#2568, starting in urllib3 1.10.3 an exception is thrown if a connection with chunked transfer-encoding is reused.
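For context, here is a minimal, self-contained sketch of the mechanism under discussion (a hypothetical local server, not the original reproduction): httplib (`http.client` in Python 3) will only reuse a keep-alive connection once the previous chunked response body has been fully read. urllib3 1.10.3 stopped leaving the underlying response in that fully-read state, which is what breaks reuse.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class ChunkedHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # required for keep-alive + chunked

    def do_GET(self):
        self.send_response(200)
        self.send_header("Transfer-Encoding", "chunked")
        self.end_headers()
        for chunk in (b"hello ", b"world"):
            # each chunk: hex length, CRLF, data, CRLF
            self.wfile.write(b"%x\r\n%s\r\n" % (len(chunk), chunk))
        self.wfile.write(b"0\r\n\r\n")  # terminating zero-length chunk

    def log_message(self, *args):  # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), ChunkedHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
body1 = conn.getresponse().read()  # draining the body frees the connection

conn.request("GET", "/")           # reuses the same keep-alive socket
body2 = conn.getresponse().read()
server.shutdown()
print(body1, body2)  # b'hello world' b'hello world'
```

If the first body were left partially unread, a second `getresponse()` on the same connection would fail instead of reusing the socket.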

@sigmavirus24
Contributor

Thanks @lmikkelsen

@ne0ark

ne0ark commented Apr 24, 2015

Yeah, I am also having the same issue. Is there a solution for this?

@Lukasa
Contributor

Lukasa commented Apr 24, 2015

We're working on it. =) Keep an eye on this issue for more.

On 24 Apr 2015, at 18:09, ne0ark notifications@github.com wrote:

Yeah, I am also having the same issue. Is there a solution for this?



@ne0ark

ne0ark commented Apr 24, 2015

It seems like this is broken in mechanize as well.

@untitaker
Contributor

I get a similar exception with the following script, but the traceback is different:

import requests

s = requests.session()

r = s.get('http://httpbin.org/stream/20')
s.get('http://httpbin.org/stream/20')

Start of traceback (full):

Traceback (most recent call last):
  File "/home/untitaker/projects/requests/requests/packages/urllib3/connectionpool.py", line 372, in _make_request
    httplib_response = conn.getresponse(buffering=True)
TypeError: getresponse() got an unexpected keyword argument 'buffering'

During handling of the above exception, another exception occurred:

Happening with Python 3.4.3

@Lukasa
Contributor

Lukasa commented Apr 25, 2015

Please don't print the start of the traceback, it's irrelevant. Print the end of it. =D

@Lukasa
Contributor

Lukasa commented Apr 25, 2015

To be clear @untitaker, your traceback is different because you've got Python 3 printing the chained exceptions. If the other users did, they'd show the same traceback as you.
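The chaining works like this: urllib3 probes for CPython 2's non-standard `getresponse(buffering=True)` and falls back when it gets a `TypeError`, so on Python 3 that probe shows up as the first exception, followed by "During handling of the above exception, another exception occurred:" and the real failure. A small sketch of the implicit chaining (hypothetical messages, not urllib3's code):

```python
def probe():
    try:
        # stands in for urllib3's buffering=True probe
        raise TypeError("getresponse() got an unexpected keyword argument")
    except TypeError:
        # raising inside an except block implicitly chains the
        # exception being handled onto the new one (__context__)
        raise RuntimeError("real failure while handling the fallback")

try:
    probe()
except RuntimeError as exc:
    chained = exc.__context__  # the implicitly chained TypeError

print(type(chained).__name__)  # TypeError
```

So the `TypeError` at the top of @untitaker's paste is noise; the exception after the "During handling" line is the one that matters.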

@ne0ark

ne0ark commented Apr 25, 2015

Here is my traceback using urllib2:

https://gist.github.com/ne0ark/a451c13de9e6643a751e

@sigmavirus24
Contributor

So using just urllib3, I'm having trouble reproducing this. The following is more or less what an adapter does in requests:

import urllib3

url = 'http://httpbin.org/stream/20'
http = urllib3.PoolManager(
    block=False,
    num_pools=10,
    maxsize=10,
    strict=True,
)
conn = http.connection_from_url(url)
response = conn.urlopen(
    method='GET',
    url=url,
    headers={
        'Connection': 'keep-alive',
        'Accept-Encoding': 'gzip, compress',
    },
    preload_content=False,
    decode_content=False,
)
list(response.stream())
response = conn.urlopen(
    method='GET',
    url=url,
    headers={
        'Connection': 'keep-alive',
        'Accept-Encoding': 'gzip, compress',
    },
    preload_content=False,
    decode_content=False,
)
list(response.stream())

@t-8ch
Contributor

t-8ch commented Apr 26, 2015

53a1e6a is the culprit.

@sigmavirus24
Contributor

@t-8ch do we really think closing the connection is the right thing to do here?

Edit: that does fix the issue, though.

@t-8ch
Contributor

t-8ch commented Apr 26, 2015

@sigmavirus24 I don't think so. But somehow httplib has to be convinced that we are done with this response.

@t-8ch
Contributor

t-8ch commented Apr 26, 2015

Doing

if self._original_response is not None:
    # detach httplib's file object so it treats the response as consumed
    self._original_response.fp = None
self.release_conn()

fixes the bug but is missing test coverage.

@t-8ch
Contributor

t-8ch commented Apr 26, 2015

@sigmavirus24 It seems even weirder. Running your example from upthread shows me three requests on two sockets in Wireshark.
The second request is duplicated and sent on two sockets. I assume the exception is caught somewhere.

Using requests gives me two requests on one socket (as expected, plus the exception).

@t-8ch
Contributor

t-8ch commented Apr 26, 2015

Yeah, of course: retries... So requests disables retries, which explains the differences.

@t-8ch
Contributor

t-8ch commented Apr 26, 2015

Also:
testing against a local httpbin works (gunicorn does not use keep-alive...). And reading the documentation for the release_conn parameter to ConnectionPool.__init__, we should not try to automatically return the connection to the pool if we set preload_content (which we do).

@lmikkelsen Did you run into this problem using httpbin?

t-8ch added a commit to t-8ch/urllib3 that referenced this issue Apr 26, 2015: FIXME better description
@sigmavirus24
Contributor

To be clear, you can reproduce this with:

import urllib3

url = 'http://httpbin.org/stream/20'
http = urllib3.PoolManager(
    block=False,
    num_pools=10,
    maxsize=10,
    strict=True,
    retries=0,
)
conn = http.connection_from_url(url)
response = conn.urlopen(
    method='GET',
    url=url,
    headers={
        'Connection': 'keep-alive',
        'Accept-Encoding': 'gzip, compress',
    },
    preload_content=False,
    decode_content=False,
)
list(response.stream())
response = conn.urlopen(
    method='GET',
    url=url,
    headers={
        'Connection': 'keep-alive',
        'Accept-Encoding': 'gzip, compress',
    },
    preload_content=False,
    decode_content=False,
)
list(response.stream())

For some reason I thought we were using something like 5 for the retries value.

@sigmavirus24
Contributor

I've been looking into @t-8ch's solutions, and calling self._original_response.close() simply rids us of the socket's file object. It also looks like @t-8ch has some commits that add tests and a fix for this. I'll wait for him to shape those up into something he's willing to send as a PR.
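A small illustration (driving stdlib `http.client` by hand over a socket pair, an assumed setup rather than urllib3's code) of what `close()` does to the response's file object:

```python
import http.client
import socket

# Hand the client side of a socket pair to httplib's response parser.
srv, cli = socket.socketpair()
srv.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\nhello")

resp = http.client.HTTPResponse(cli)
resp.begin()                   # parse status line and headers
had_fp = resp.fp is not None   # the socket's file object is attached

resp.close()                   # detaches and closes the file object
print(had_fp, resp.fp is None, resp.isclosed())  # True True True
srv.close()
```

`isclosed()` is just a check that `fp` is gone, which is why both `close()` and t-8ch's `fp = None` trick make httplib treat the response as consumed.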

6 participants