
Fix typos (#1968)
kianmeng committed Dec 14, 2021
1 parent 3c5884e commit 82ba15b
Showing 12 changed files with 24 additions and 24 deletions.
8 changes: 4 additions & 4 deletions CHANGELOG.md
@@ -92,7 +92,7 @@ finally:
### Fixed

* `response.iter_bytes()` no longer raises a ValueError when called on a response with no content. (Pull #1827)
-* The `'wsgi.error'` configuration now defaults to `sys.stderr`, and is corrected to be a `TextIO` interface, not a `BytesIO` interface. Additionally, the WSGITransport now accepts a `wsgi_error` confguration. (Pull #1828)
+* The `'wsgi.error'` configuration now defaults to `sys.stderr`, and is corrected to be a `TextIO` interface, not a `BytesIO` interface. Additionally, the WSGITransport now accepts a `wsgi_error` configuration. (Pull #1828)
* Follow the WSGI spec by properly closing the iterable returned by the application. (Pull #1830)

## 0.19.0 (19th August, 2021)
@@ -347,7 +347,7 @@ The following API changes have been issuing deprecation warnings since 0.17.0 on

The 0.14 release includes a range of improvements to the public API, intended on preparing for our upcoming 1.0 release.

-* Our HTTP/2 support is now fully optional. **You now need to use `pip install httpx[http2]` if you want to include the HTTP/2 dependancies.**
+* Our HTTP/2 support is now fully optional. **You now need to use `pip install httpx[http2]` if you want to include the HTTP/2 dependencies.**
* Our HSTS support has now been removed. Rewriting URLs from `http` to `https` if the host is on the HSTS list can be beneficial in avoiding roundtrips to incorrectly formed URLs, but on balance we've decided to remove this feature, on the principle of least surprise. Most programmatic clients do not include HSTS support, and for now we're opting to remove our support for it.
* Our exception hierarchy has been overhauled. Most users will want to stick with their existing `httpx.HTTPError` usage, but we've got a clearer overall structure now. See https://www.python-httpx.org/exceptions/ for more details.

@@ -720,7 +720,7 @@ importing modules within the package.
- The SSL configuration settings of `verify`, `cert`, and `trust_env` now raise warnings if used per-request when using a Client instance. They should always be set on the Client instance itself. (Pull #597)
- Use plain strings "TUNNEL_ONLY" or "FORWARD_ONLY" on the HTTPProxy `proxy_mode` argument. The `HTTPProxyMode` enum still exists, but its usage will raise warnings. (#610)
- Pool timeouts are now on the timeout configuration, not the pool limits configuration. (Pull #563)
-- The timeout configuration is now named `httpx.Timeout(...)`, not `httpx.TimeoutConfig(...)`. The old version currently remains as a synonym for backwards compatability. (Pull #591)
+- The timeout configuration is now named `httpx.Timeout(...)`, not `httpx.TimeoutConfig(...)`. The old version currently remains as a synonym for backwards compatibility. (Pull #591)

---

@@ -849,7 +849,7 @@ importing modules within the package.
- Switch IDNA encoding from IDNA 2003 to IDNA 2008. (Pull #161)
- Expose base classes for alternate concurrency backends. (Pull #178)
- Improve Multipart parameter encoding. (Pull #167)
-- Add the `headers` proeprty to `BaseClient`. (Pull #159)
+- Add the `headers` property to `BaseClient`. (Pull #159)
- Add support for Google's `brotli` library. (Pull #156)
- Remove deprecated TLS versions (TLSv1 and TLSv1.1) from default `SSLConfig`. (Pull #155)
- Fix `URL.join(...)` to work similarly to RFC 3986 URL joining. (Pull #144)
2 changes: 1 addition & 1 deletion docs/advanced.md
@@ -26,7 +26,7 @@ This can bring **significant performance improvements** compared to using the to

`Client` instances also support features that aren't available at the top-level API, such as:

-- Cookie persistance across requests.
+- Cookie persistence across requests.
- Applying configuration across all outgoing requests.
- Sending requests through HTTP proxies.
- Using [HTTP/2](http2.md).
4 changes: 2 additions & 2 deletions docs/compatibility.md
@@ -16,7 +16,7 @@ You can still enable behaviour to automatically follow redirects, but you need t
do so explicitly...

```python
-respose = client.get(url, follow_redirects=True)
+response = client.get(url, follow_redirects=True)
```

Or else instantiate a client, with redirect following enabled by default...
@@ -97,7 +97,7 @@ opened in text mode.

## Content encoding

-HTTPX uses `utf-8` for encoding `str` request bodies. For example, when using `content=<str>` the request body will be encoded to `utf-8` before being sent over the wire. This differs from Requests which uses `latin1`. If you need an explicit encoding, pass encoded bytes explictly, e.g. `content=<str>.encode("latin1")`.
+HTTPX uses `utf-8` for encoding `str` request bodies. For example, when using `content=<str>` the request body will be encoded to `utf-8` before being sent over the wire. This differs from Requests which uses `latin1`. If you need an explicit encoding, pass encoded bytes explicitly, e.g. `content=<str>.encode("latin1")`.
For response bodies, assuming the server didn't send an explicit encoding then HTTPX will do its best to figure out an appropriate encoding. HTTPX makes a guess at the encoding to use for decoding the response using `charset_normalizer`. Fallback to that or any content with less than 32 octets will be decoded using `utf-8` with the `error="replace"` decoder strategy.
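The encoding difference described there can be demonstrated with plain `str.encode` (a sketch using standard `utf-8`/`latin1` behaviour; not part of the diff):

```python
# "é" encodes to two bytes in utf-8, but a single byte in latin1.
text = "café"
utf8_body = text.encode("utf-8")      # what HTTPX sends for content=<str>
latin1_body = text.encode("latin1")   # what Requests would have sent

print(utf8_body)    # b'caf\xc3\xa9'
print(latin1_body)  # b'caf\xe9'
```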

## Cookies
4 changes: 2 additions & 2 deletions httpx/_client.py
@@ -1275,7 +1275,7 @@ def __exit__(

def __del__(self) -> None:
# We use 'getattr' here, to manage the case where '__del__()' is called
-# on a partically initiallized instance that raised an exception during
+# on a partially initiallized instance that raised an exception during
# the call to '__init__()'.
if getattr(self, "_state", None) == ClientState.OPENED: # noqa: B009
self.close()
@@ -1986,7 +1986,7 @@ async def __aexit__(

def __del__(self) -> None:
# We use 'getattr' here, to manage the case where '__del__()' is called
-# on a partically initiallized instance that raised an exception during
+# on a partially initiallized instance that raised an exception during
# the call to '__init__()'.
if getattr(self, "_state", None) == ClientState.OPENED: # noqa: B009
# Unlike the sync case, we cannot silently close the client when
2 changes: 1 addition & 1 deletion httpx/_content.py
@@ -188,7 +188,7 @@ def encode_request(
returning a two-tuple of (<headers>, <stream>).
"""
if data is not None and not isinstance(data, dict):
-# We prefer to seperate `content=<bytes|str|byte iterator|bytes aiterator>`
+# We prefer to separate `content=<bytes|str|byte iterator|bytes aiterator>`
# for raw request content, and `data=<form data>` for url encoded or
# multipart form content.
#
16 changes: 8 additions & 8 deletions httpx/_models.py
@@ -327,7 +327,7 @@ def query(self) -> bytes:
"""
The URL query string, as raw bytes, excluding the leading b"?".
-This is neccessarily a bytewise interface, because we cannot
+This is necessarily a bytewise interface, because we cannot
perform URL decoding of this representation until we've parsed
the keys and values into a QueryParams instance.
@@ -465,7 +465,7 @@ def copy_with(self, **kwargs: typing.Any) -> "URL":
port = kwargs.pop("port", self.port)

if host and ":" in host and host[0] != "[":
-# IPv6 addresses need to be escaped within sqaure brackets.
+# IPv6 addresses need to be escaped within square brackets.
host = f"[{host}]"

kwargs["netloc"] = (
@@ -907,7 +907,7 @@ def values(self) -> typing.ValuesView[str]:
def items(self) -> typing.ItemsView[str, str]:
"""
Return `(key, value)` items of headers. Concatenate headers
-into a single comma seperated value when a key occurs multiple times.
+into a single comma separated value when a key occurs multiple times.
"""
values_dict: typing.Dict[str, str] = {}
for _, key, value in self._list:
@@ -922,8 +922,8 @@ def items(self) -> typing.ItemsView[str, str]:
def multi_items(self) -> typing.List[typing.Tuple[str, str]]:
"""
Return a list of `(key, value)` pairs of headers. Allow multiple
-occurences of the same key without concatenating into a single
-comma seperated value.
+occurrences of the same key without concatenating into a single
+comma separated value.
"""
return [
(key.decode(self.encoding), value.decode(self.encoding))
@@ -932,7 +932,7 @@ def multi_items(self) -> typing.List[typing.Tuple[str, str]]:

def get(self, key: str, default: typing.Any = None) -> typing.Any:
"""
-Return a header value. If multiple occurences of the header occur
+Return a header value. If multiple occurrences of the header occur
then concatenate them together with commas.
"""
try:
@@ -943,7 +943,7 @@ def get(self, key: str, default: typing.Any = None) -> typing.Any:
def get_list(self, key: str, split_commas: bool = False) -> typing.List[str]:
"""
Return a list of all header values for a given key.
-If `split_commas=True` is passed, then any comma seperated header
+If `split_commas=True` is passed, then any comma separated header
values are split into multiple return strings.
"""
get_header_key = key.lower().encode(self.encoding)
@@ -1365,7 +1365,7 @@ def charset_encoding(self) -> typing.Optional[str]:
@property
def apparent_encoding(self) -> typing.Optional[str]:
"""
-Return the encoding, as detemined by `charset_normalizer`.
+Return the encoding, as determined by `charset_normalizer`.
"""
content = getattr(self, "_content", b"")
if len(content) < 32:
2 changes: 1 addition & 1 deletion httpx/_transports/base.py
@@ -25,7 +25,7 @@ def handle_request(self, request: Request) -> Response:
At this layer of API we're simply using plain primitives. No `Request` or
`Response` models, no fancy `URL` or `Header` handling. This strict point
-of cut-off provides a clear design seperation between the HTTPX API,
+of cut-off provides a clear design separation between the HTTPX API,
and the low-level network handling.
Developers shouldn't typically ever need to call into this API directly,
2 changes: 1 addition & 1 deletion httpx/_transports/default.py
@@ -9,7 +9,7 @@
Example usages...
-# Disable HTTP/2 on a single specfic domain.
+# Disable HTTP/2 on a single specific domain.
mounts = {
"all://": httpx.HTTPTransport(http2=True),
"all://*example.org": httpx.HTTPTransport()
2 changes: 1 addition & 1 deletion httpx/_types.py
@@ -119,7 +119,7 @@ def close(self) -> None:

def read(self) -> bytes:
"""
-Simple cases can use `.read()` as a convience method for consuming
+Simple cases can use `.read()` as a convenience method for consuming
the entire stream and then closing it.
Example:
2 changes: 1 addition & 1 deletion httpx/_utils.py
@@ -305,7 +305,7 @@ def get_environment_proxies() -> typing.Dict[str, typing.Optional[str]]:
# on how names in `NO_PROXY` are handled.
if hostname == "*":
# If NO_PROXY=* is used or if "*" occurs as any one of the comma
-# seperated hostnames, then we should just bypass any information
+# separated hostnames, then we should just bypass any information
# from HTTP_PROXY, HTTPS_PROXY, ALL_PROXY, and always ignore
# proxies.
return {}
2 changes: 1 addition & 1 deletion tests/client/test_proxies.py
@@ -225,7 +225,7 @@ def test_unsupported_proxy_scheme():
{"ALL_PROXY": "http://localhost:123", "NO_PROXY": ".example1.com"},
None,
),
-# Proxied, because NO_PROXY subdomains only match if "." seperated.
+# Proxied, because NO_PROXY subdomains only match if "." separated.
(
"https://www.example2.com",
{"ALL_PROXY": "http://localhost:123", "NO_PROXY": "ample2.com"},
2 changes: 1 addition & 1 deletion tests/test_utils.py
@@ -71,7 +71,7 @@ def test_get_netrc_login():

def test_get_netrc_unknown():
netrc_info = NetRCInfo([str(FIXTURES_DIR / ".netrc")])
-assert netrc_info.get_credentials("nonexistant.org") is None
+assert netrc_info.get_credentials("nonexistent.org") is None


@pytest.mark.parametrize(
