
update docs

commit 1451ba0c6d395c41f86da35036fa361c3a41bc90 1 parent 480a382
@kennethreitz authored
86 docs/user/advanced.rst
@@ -116,10 +116,10 @@ If you specify a wrong path or an invalid cert::
Body Content Workflow
---------------------
-By default, when you make a request, the body of the response is downloaded immediately. You can override this behavior and defer downloading the response body until you access the :class:`Response.content` attribute with the ``prefetch`` parameter::
+By default, when you make a request, the body of the response is downloaded immediately. You can override this behavior and defer downloading the response body until you access the :class:`Response.content` attribute with the ``stream`` parameter::
tarball_url = 'https://github.com/kennethreitz/requests/tarball/master'
- r = requests.get(tarball_url, prefetch=False)
+ r = requests.get(tarball_url, stream=True)
At this point only the response headers have been downloaded and the connection remains open, hence allowing us to make content retrieval conditional::
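The deferred-download behaviour described here can be sketched with a toy object. `LazyResponse` below is a hypothetical stand-in, not the real `requests.Response`; it only illustrates the `stream=True` contract that headers arrive immediately while the body is fetched on first access to ``content``::

```python
class LazyResponse:
    """Toy stand-in (an assumption, not part of requests) mimicking
    stream=True semantics: headers are available immediately, the body
    is only downloaded when .content is first accessed."""

    def __init__(self, headers, fetch_body):
        self.headers = headers
        self._fetch = fetch_body
        self._content = None

    @property
    def content(self):
        if self._content is None:
            self._content = self._fetch()  # the download happens here, lazily
        return self._content


fetched = []
r = LazyResponse({'content-length': '5'},
                 lambda: fetched.append(True) or b'hello')

assert fetched == []  # nothing downloaded yet, headers only

# Conditional retrieval: only pull the body when it is small enough.
if int(r.headers['content-length']) < 1000:
    body = r.content

assert fetched == [True] and body == b'hello'
```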
@@ -129,8 +129,6 @@ At this point only the response headers have been downloaded and the connection
You can further control the workflow by use of the :class:`Response.iter_content` and :class:`Response.iter_lines` methods, or reading from the underlying urllib3 :class:`urllib3.HTTPResponse` at :class:`Response.raw`.
-Note that in versions prior to 0.13.6 the ``prefetch`` default was set to ``False``.
-
Configuring Requests
--------------------
@@ -143,19 +141,7 @@ Keep-Alive
Excellent news — thanks to urllib3, keep-alive is 100% automatic within a session! Any requests that you make within a session will automatically reuse the appropriate connection!
-Note that connections are only released back to the pool for reuse once all body data has been read; be sure to either set ``prefetch`` to ``True`` or read the ``content`` property of the ``Response`` object.
-
-If you'd like to disable keep-alive, you can simply set the ``keep_alive`` configuration to ``False``::
-
- s = requests.session()
- s.config['keep_alive'] = False
-
-
-Asynchronous Requests
-----------------------
-
-
-``requests.async`` has been removed from requests and is now its own repository named `GRequests <https://github.com/kennethreitz/grequests>`_.
+Note that connections are only released back to the pool for reuse once all body data has been read; be sure to either set ``stream`` to ``False`` or read the ``content`` property of the ``Response`` object.
Event Hooks
@@ -166,15 +152,6 @@ the request process, or signal event handling.
Available hooks:
-``args``:
- A dictionary of the arguments being sent to Request().
-
-``pre_request``:
- The Request object, directly before being sent.
-
-``post_request``:
- The Request object, directly after being sent.
-
``response``:
The response generated from a Request.
@@ -183,15 +160,15 @@ You can assign a hook function on a per-request basis by passing a
``{hook_name: callback_function}`` dictionary to the ``hooks`` request
parameter::
- hooks=dict(args=print_url)
+ hooks=dict(response=print_url)
That ``callback_function`` will receive a chunk of data as its first
argument.
::
- def print_url(args):
- print args['url']
+ def print_url(r):
+ print(r.url)
If an error occurs while executing your callback, a warning is given.
@@ -201,41 +178,10 @@ anything, nothing else is effected.
Let's print some request method arguments at runtime::
- >>> requests.get('http://httpbin.org', hooks=dict(args=print_url))
+ >>> requests.get('http://httpbin.org', hooks=dict(response=print_url))
http://httpbin.org
<Response [200]>
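The dispatch behind the ``hooks`` parameter can be sketched offline. This is a simplified assumption about how requests invokes a registered callback, not its exact internal code: the callback for a given hook name is called with the hook data, and a non-``None`` return value replaces that data::

```python
def dispatch_hook(key, hooks, hook_data):
    """Simplified sketch (an assumption, in the spirit of requests'
    internal hook dispatch): call the callback registered under `key`;
    if it returns a value, that value replaces the data passed along."""
    hooks = hooks or {}
    hook = hooks.get(key)
    if hook is not None:
        result = hook(hook_data)
        if result is not None:
            hook_data = result
    return hook_data


seen = []
hooks = dict(response=lambda r: seen.append(r))  # callback returns None

out = dispatch_hook('response', hooks, 'fake-response')

assert seen == ['fake-response']   # the callback was invoked
assert out == 'fake-response'      # returning None leaves the data untouched
```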
-Let's hijack some arguments this time with a new callback::
-
- def hack_headers(args):
- if args.get('headers') is None:
- args['headers'] = dict()
-
- args['headers'].update({'X-Testing': 'True'})
-
- return args
-
- hooks = dict(args=hack_headers)
- headers = dict(yo=dawg)
-
-And give it a try::
-
- >>> requests.get('http://httpbin.org/headers', hooks=hooks, headers=headers)
- {
- "headers": {
- "Content-Length": "",
- "Accept-Encoding": "gzip",
- "Yo": "dawg",
- "X-Forwarded-For": "::ffff:24.127.96.129",
- "Connection": "close",
- "User-Agent": "python-requests.org",
- "Host": "httpbin.org",
- "X-Testing": "True",
- "X-Forwarded-Protocol": "",
- "Content-Type": ""
- }
- }
-
Custom Authentication
---------------------
@@ -283,27 +229,13 @@ To use the Twitter Streaming API to track the keyword "requests"::
import json
r = requests.post('https://stream.twitter.com/1/statuses/filter.json',
- data={'track': 'requests'}, auth=('username', 'password'), prefetch=False)
+ data={'track': 'requests'}, auth=('username', 'password'), stream=True)
for line in r.iter_lines():
if line: # filter out keep-alive new lines
print json.loads(line)
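The line-wise iteration used above can be sketched without a network. The ``iter_lines`` function below is a simplified assumption about the technique, not the real ``Response.iter_lines``: it buffers partial lines across chunk boundaries, and empty keep-alive lines are filtered out exactly as in the streaming example::

```python
import json


def iter_lines(chunks, delimiter=b'\n'):
    """Sketch of line-wise iteration over a chunked byte stream (a
    simplified assumption, not requests' implementation): buffer the
    trailing partial line until the next chunk completes it."""
    pending = b''
    for chunk in chunks:
        pending += chunk
        lines = pending.split(delimiter)
        pending = lines.pop()  # the last piece may be incomplete
        for line in lines:
            yield line
    if pending:
        yield pending


# A JSON record split across chunks, plus an empty keep-alive line.
chunks = [b'{"track": "req', b'uests"}\n\n{"a"', b': 1}\n']
records = [json.loads(line) for line in iter_lines(chunks) if line]

assert records == [{'track': 'requests'}, {'a': 1}]
```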
-Verbose Logging
----------------
-
-If you want to get a good look at what HTTP requests are being sent
-by your application, you can turn on verbose logging.
-
-To do so, just configure Requests with a stream to write to::
-
- >>> my_config = {'verbose': sys.stderr}
- >>> requests.get('http://httpbin.org/headers', config=my_config)
- 2011-08-17T03:04:23.380175 GET http://httpbin.org/headers
- <Response [200]>
-
-
Proxies
-------
@@ -349,7 +281,7 @@ Encodings
When you receive a response, Requests makes a guess at the encoding to use for
decoding the response when you call the ``Response.text`` method. Requests
will first check for an encoding in the HTTP header, and if none is present,
-will use `chardet <http://pypi.python.org/pypi/chardet>`_ to attempt to guess
+will use `charade <http://pypi.python.org/pypi/charade>`_ to attempt to guess
the encoding.
The only time Requests will not do this is if no explicit charset is present
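The "check the HTTP header first" step can be sketched offline. ``get_encoding_from_headers`` below is a simplified assumption loosely modelled on requests' utility of the same name: extract an explicit ``charset`` from the Content-Type header, and return ``None`` when there is none (which is when byte-level guessing kicks in)::

```python
def get_encoding_from_headers(headers):
    """Sketch (an assumption, not requests' exact code): look for an
    explicit charset parameter in the Content-Type header."""
    content_type = headers.get('content-type', '')
    for part in content_type.split(';')[1:]:
        key, _, value = part.strip().partition('=')
        if key.lower() == 'charset':
            return value.strip('\'"')
    return None


enc = get_encoding_from_headers(
    {'content-type': 'text/html; charset=ISO-8859-1'})
assert enc == 'ISO-8859-1'

# No charset parameter: fall through to guessing from the body bytes.
assert get_encoding_from_headers({'content-type': 'application/json'}) is None
```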
4 docs/user/quickstart.rst
@@ -139,9 +139,9 @@ Raw Response Content
In the rare case that you'd like to get the raw socket response from the
server, you can access ``r.raw``. If you want to do this, make sure you set
-``prefetch=False`` in your initial request. Once you do, you can do this::
+``stream=True`` in your initial request. Once you do, you can do this::
- >>> r = requests.get('https://github.com/timeline.json', prefetch=False)
+ >>> r = requests.get('https://github.com/timeline.json', stream=True)
>>> r.raw
<requests.packages.urllib3.response.HTTPResponse object at 0x101194810>
>>> r.raw.read(10)
11 requests/models.py
@@ -528,7 +528,6 @@ def text(self):
return content
- @property

@spezifanta: Is there a reason why this got removed?

@sigmavirus24 Collaborator

A very specific one. It wasn't good design in the first place. Attributes shouldn't throw errors on decoding JSON, but not throwing an error would be misleading and might make a user think there was no JSON. As a method, it is perfectly legal for it to throw an exception. And that's how it should be.

def json(self):
"""Returns the json-encoded content of a response, if any."""
@@ -539,14 +538,8 @@ def json(self):
# a best guess).
encoding = guess_json_utf(self.content)
if encoding is not None:
- try:
- return json.loads(self.content.decode(encoding))
- except (ValueError, UnicodeDecodeError):
- pass
- try:
- return json.loads(self.text or self.content)
- except ValueError:
- return None
+ return json.loads(self.content.decode(encoding))
+ return json.loads(self.text or self.content)

@mjpieters: This now causes non-RFC-compliant JSON (encoded to something other than a UTF codec, with no charset set in the response) to fail. guess_json_utf() will still return UTF-8 for most codecs (latin-1, latin-2, etc.). Removing the first try: block breaks handling of such responses.
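The regression described in that comment can be reproduced offline. A latin-1 body with no charset looks like UTF-8 to a BOM/null-byte heuristic such as guess_json_utf, then fails to decode; the deleted try/except was what fell through to decoding with the apparent text encoding (the hard-coded ``'utf-8'`` guess below stands in for the heuristic's result)::

```python
import json

# Non-UTF JSON body, no charset header on the response (assumed scenario).
body = '{"word": "caf\u00e9"}'.encode('latin-1')
guessed = 'utf-8'  # what a UTF-family heuristic typically returns here

# With the try/except removed, this path now raises:
try:
    json.loads(body.decode(guessed))
    strict_ok = True
except UnicodeDecodeError:
    strict_ok = False

# The deleted fallback recovered by using the apparent text encoding:
fallback = json.loads(body.decode('latin-1'))

assert strict_ok is False
assert fallback == {'word': 'caf\u00e9'}
```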

@property
def links(self):
7 requests/sessions.py
@@ -62,7 +62,7 @@ def merge_kwargs(local_kwarg, default_kwarg):
class SessionRedirectMixin(object):
- def resolve_redirects(self, resp, req, stream=False, timeout=None, verify=True, cert=None):
+ def resolve_redirects(self, resp, req, stream=False, timeout=None, verify=True, cert=None, proxies=None):
"""Receives a Response. Returns a generator of Responses."""
i = 0
@@ -125,7 +125,8 @@ def resolve_redirects(self, resp, req, stream=False, timeout=None, verify=True,
stream=stream,
timeout=timeout,
verify=verify,
- cert=cert
+ cert=cert,
+ proxies=proxies
)
i += 1
@@ -255,7 +256,7 @@ def request(self, method, url,
resp = self.send(prep, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies)
# Redirect resolving generator.
- gen = self.resolve_redirects(resp, req, stream, timeout, verify, cert)
+ gen = self.resolve_redirects(resp, req, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies)
# Resolve redirects if allowed.
history = [r for r in gen] if allow_redirects else []