Unicode Objects and Codecs

Marc-André Lemburg <mal@lemburg.com>

Georg Brandl <georg@python.org>

Unicode Objects

Since the implementation of PEP 393 in Python 3.3, Unicode objects internally use a variety of representations, in order to allow handling the complete range of Unicode characters while staying memory efficient. There are special cases for strings where all code points are below 128, 256, or 65536; otherwise, code points must be below 1114112 (which is the full Unicode range).

UTF-8 representation is created on demand and cached in the Unicode object.

Note

The Py_UNICODE representation has been removed since Python 3.12, along with its deprecated APIs. See PEP 623 for more information.

Unicode Type

These are the basic Unicode object types used for the Unicode implementation in Python:

These types are typedefs for unsigned integer types wide enough to contain characters of 32 bits, 16 bits and 8 bits, respectively. When dealing with single Unicode characters, use Py_UCS4.

Added in version 3.3.

These subtypes of PyObject represent a Python Unicode object. In almost all cases, they shouldn't be used directly, since all API functions that deal with Unicode objects take and return PyObject pointers.

Added in version 3.3.

The following APIs are C macros and static inline functions for fast checks and access to internal read-only data of Unicode objects:

Return a pointer to the canonical representation cast to UCS1, UCS2 or UCS4 integer types for direct character access. No check is performed to verify that the canonical representation has the correct character size; use PyUnicode_KIND() to select the right function.

Added in version 3.3.

Return values of the PyUnicode_KIND() macro.

Added in version 3.3.

Changed in version 3.12: PyUnicode_WCHAR_KIND has been removed.

Write into the canonical representation data (as obtained with PyUnicode_DATA()). This function performs no sanity checks, and is intended for usage in loops. The caller should cache the kind value and data pointer as obtained from other calls. index is the index in the string (starting at 0) and value is the new code point value which should be written to that location.

Added in version 3.3.
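As a minimal sketch, the macro can be used together with PyUnicode_New() to fill a freshly created string. The helper name make_digits is hypothetical, and the maxchar value of 127 assumes ASCII-only output:

    #include <Python.h>

    static PyObject *
    make_digits(Py_ssize_t n)
    {
        /* Assumption: ASCII output only, so maxchar 127 selects the 1-byte kind. */
        PyObject *s = PyUnicode_New(n, 127);
        if (s == NULL) {
            return NULL;
        }
        /* Cache kind and data outside the loop, as recommended above. */
        int kind = PyUnicode_KIND(s);
        void *data = PyUnicode_DATA(s);
        for (Py_ssize_t i = 0; i < n; i++) {
            PyUnicode_WRITE(kind, data, i, (Py_UCS4)('0' + i % 10));
        }
        return s;
    }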

Read a code point from the canonical representation data (as obtained with PyUnicode_DATA()). No checks or ready calls are performed.

Added in version 3.3.
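A complementary sketch for the read side (the helper name count_ascii is hypothetical): iterate over the code points of a string using the kind, data and length accessors described above:

    #include <Python.h>

    static Py_ssize_t
    count_ascii(PyObject *unicode)
    {
        if (!PyUnicode_Check(unicode)) {
            PyErr_SetString(PyExc_TypeError, "expected str");
            return -1;
        }
        int kind = PyUnicode_KIND(unicode);          /* 1, 2 or 4 bytes per character */
        const void *data = PyUnicode_DATA(unicode);  /* canonical representation */
        Py_ssize_t length = PyUnicode_GET_LENGTH(unicode);
        Py_ssize_t n = 0;
        for (Py_ssize_t i = 0; i < length; i++) {
            if (PyUnicode_READ(kind, data, i) < 128) {
                n++;
            }
        }
        return n;
    }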

Unicode Character Properties

Unicode provides many different character properties. The most often needed ones are available through these macros which are mapped to C functions depending on the Python configuration.

These APIs can be used for fast direct character conversions:

These APIs can be used to work with surrogates:

Creating and accessing Unicode strings

To create Unicode objects and access their basic sequence properties, use these APIs:

Create a new Unicode object with the given kind (possible values are PyUnicode_1BYTE_KIND etc., as returned by PyUnicode_KIND()). The buffer must point to an array of size units of 1, 2 or 4 bytes per character, as given by the kind.

If necessary, the input buffer is copied and transformed into the canonical representation. For example, if the buffer is a UCS4 string (PyUnicode_4BYTE_KIND) and it consists only of code points in the UCS1 range, it will be transformed into UCS1 (PyUnicode_1BYTE_KIND).

Added in version 3.3.
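For illustration, a minimal sketch assuming this entry documents PyUnicode_FromKindAndData() (the helper name from_ucs2_example is hypothetical): a UCS2 buffer whose code points all fit in Latin-1 is narrowed to the 1-byte representation:

    #include <Python.h>

    static PyObject *
    from_ucs2_example(void)
    {
        /* All code points fit in the UCS1 range, so the result will use
           PyUnicode_1BYTE_KIND internally. */
        Py_UCS2 buf[] = {'h', 'e', 'l', 'l', 'o'};
        return PyUnicode_FromKindAndData(PyUnicode_2BYTE_KIND, buf,
                                         (Py_ssize_t)(sizeof(buf) / sizeof(buf[0])));
    }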

Decode an encoded object obj to a Unicode object.

bytes, bytearray and other bytes-like objects are decoded according to the given encoding and using the error handling defined by errors. Both can be NULL to have the interface use the default values (see Built-in Codecs below for details).

All other objects, including Unicode objects, cause a TypeError to be set.

The API returns NULL if there was an error. The caller is responsible for decref'ing the returned objects.
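A short sketch, assuming this entry documents PyUnicode_FromEncodedObject(); the helper name decode_payload and the "utf-8"/"strict" arguments are just examples:

    #include <Python.h>

    static PyObject *
    decode_payload(PyObject *raw)
    {
        /* raw is expected to be bytes, bytearray or another bytes-like object. */
        PyObject *text = PyUnicode_FromEncodedObject(raw, "utf-8", "strict");
        if (text == NULL) {
            return NULL;   /* TypeError or UnicodeDecodeError has been set */
        }
        return text;       /* caller owns the new reference */
    }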

Copy characters from one Unicode object into another. This function performs character conversion when necessary and falls back to memcpy() if possible. Returns -1 and sets an exception on error, otherwise returns the number of copied characters.

Added in version 3.3.

Fill a string with a character: write fill_char into unicode[start:start+length].

Fail if fill_char is greater than the maximum character of the string, or if the string has more than one reference.

Return the number of written characters, or return -1 and raise an exception on error.

Added in version 3.3.
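A minimal sketch, assuming this entry documents PyUnicode_Fill() (the helper name make_ruler is hypothetical):

    #include <Python.h>

    static PyObject *
    make_ruler(void)
    {
        PyObject *s = PyUnicode_New(8, 127);   /* fresh, unshared ASCII string */
        if (s == NULL) {
            return NULL;
        }
        /* Write '-' into s[0:8]; returns the number of characters written,
           or -1 with an exception set. */
        if (PyUnicode_Fill(s, 0, 8, (Py_UCS4)'-') < 0) {
            Py_DECREF(s);
            return NULL;
        }
        return s;
    }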

Write a character to a string. The string must have been created through PyUnicode_New(). Since Unicode strings are supposed to be immutable, the string must not be shared or have been hashed yet.

This function checks that unicode is a Unicode object, that the index is not out of bounds, and that the object can be modified safely (i.e. that its reference count is one).

Added in version 3.3.
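A minimal sketch, assuming this entry documents PyUnicode_WriteChar() (the helper name build_marker is hypothetical); maxchar 0x10FFFF leaves room for any code point:

    #include <Python.h>

    static PyObject *
    build_marker(void)
    {
        PyObject *s = PyUnicode_New(2, 0x10FFFF);   /* fresh, unshared, not yet hashed */
        if (s == NULL) {
            return NULL;
        }
        if (PyUnicode_WriteChar(s, 0, 0x1F40D) < 0 ||   /* U+1F40D SNAKE */
            PyUnicode_WriteChar(s, 1, (Py_UCS4)'!') < 0) {
            Py_DECREF(s);
            return NULL;
        }
        return s;
    }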

Return a substring of str, from character index start (included) to character index end (excluded). Negative indices are not supported.

Added in version 3.3.

Copy the string u into a UCS4 buffer, including a null character, if copy_null is set. Returns NULL and sets an exception on error (in particular, a SystemError if buflen is smaller than the length of u). buffer is returned on success.

Added in version 3.3.
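A minimal sketch, assuming this entry documents PyUnicode_AsUCS4() (the helper name and the buffer size of 64 are arbitrary examples); the buffer must be large enough for the string plus the null character when copy_null is set:

    #include <Python.h>

    static int
    print_first_code_point(PyObject *u)
    {
        Py_UCS4 buf[64];
        if (PyUnicode_AsUCS4(u, buf, 64, 1) == NULL) {
            return -1;   /* SystemError if u does not fit, or another error */
        }
        printf("U+%04X\n", (unsigned int)buf[0]);
        return 0;
    }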

Locale Encoding

The current locale encoding can be used to decode text from the operating system.

Decode a string from UTF-8 on Android and VxWorks, or from the current locale encoding on other platforms. The supported error handlers are "strict" and "surrogateescape" (PEP 383). The decoder uses the "strict" error handler if errors is NULL. str must end with a null character but cannot contain embedded null characters.

Use PyUnicode_DecodeFSDefaultAndSize() to decode a string from the filesystem encoding and error handler.

This function ignores the Python UTF-8 Mode.

See also the Py_DecodeLocale() function.

Added in version 3.3.

Changed in version 3.7: The function now also uses the current locale encoding for the surrogateescape error handler, except on Android. Previously, Py_DecodeLocale() was used for surrogateescape, and the current locale encoding was used for strict.
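As a minimal sketch of locale decoding, here using PyUnicode_DecodeLocale(), a closely related function from this section that computes the length itself (the helper name decode_argv_item is hypothetical):

    #include <Python.h>

    static PyObject *
    decode_argv_item(const char *arg)
    {
        /* arg must be null-terminated and must not contain embedded nulls;
           "surrogateescape" keeps undecodable bytes round-trippable. */
        return PyUnicode_DecodeLocale(arg, "surrogateescape");
    }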

File System Encoding

Functions encoding to and decoding from the filesystem encoding and error handler (PEP 383 and PEP 529).

To encode file names to bytes during argument parsing, the "O&" converter should be used, passing PyUnicode_FSConverter as the conversion function (see the sketch below).

To decode file names to str during argument parsing, the "O&" converter should be used, passing PyUnicode_FSDecoder as the conversion function.
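A minimal sketch of the first pattern (the function name set_path and its module context are hypothetical): PyUnicode_FSConverter fills in a bytes object holding the filesystem-encoded name, which the caller must release.

    #include <Python.h>

    static PyObject *
    set_path(PyObject *self, PyObject *args)
    {
        PyObject *path_bytes = NULL;
        if (!PyArg_ParseTuple(args, "O&", PyUnicode_FSConverter, &path_bytes)) {
            return NULL;
        }
        const char *path = PyBytes_AS_STRING(path_bytes);  /* filesystem encoding */
        /* ... use path ... */
        Py_DECREF(path_bytes);
        Py_RETURN_NONE;
    }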

wchar_t Support

wchar_t support for platforms which support it.

Built-in Codecs

Python provides a set of built-in codecs which are written in C for speed. All of these codecs are directly usable via the following functions.

Many of the following APIs take two arguments encoding and errors, and they have the same semantics as the ones of the built-in str string object constructor.

Setting encoding to NULL causes the default encoding to be used, which is UTF-8. The file system calls should use PyUnicode_FSConverter() for encoding file names. This uses the filesystem encoding and error handler internally.

Error handling is set by errors which may also be set to NULL meaning to use the default handling defined for the codec. Default error handling for all built-in codecs is "strict" (ValueError is raised).

The codecs all use a similar interface. Only deviations from the following generic ones are documented for simplicity.

Generic Codecs

These are the generic codec APIs:

Create a Unicode object by decoding size bytes of the encoded string s. encoding and errors have the same meaning as the parameters of the same name in the str built-in function. The codec to be used is looked up using the Python codec registry. Return NULL if an exception was raised by the codec.

Encode a Unicode object and return the result as Python bytes object. encoding and errors have the same meaning as the parameters of the same name in the Unicode ~str.encode method. The codec to be used is looked up using the Python codec registry. Return NULL if an exception was raised by the codec.
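A short sketch combining both operations, assuming the two generic APIs described here are PyUnicode_Decode() and PyUnicode_AsEncodedString() (the helper name roundtrip and the "latin-1" codec are just examples):

    #include <Python.h>

    static PyObject *
    roundtrip(const char *raw, Py_ssize_t size)
    {
        PyObject *text = PyUnicode_Decode(raw, size, "latin-1", "strict");
        if (text == NULL) {
            return NULL;
        }
        PyObject *encoded = PyUnicode_AsEncodedString(text, "latin-1", "replace");
        Py_DECREF(text);
        return encoded;   /* a bytes object, or NULL with an exception set */
    }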

UTF-8 Codecs

These are the UTF-8 codec APIs:

If consumed is NULL, behave like PyUnicode_DecodeUTF8(). If consumed is not NULL, trailing incomplete UTF-8 byte sequences will not be treated as an error. Those bytes will not be decoded and the number of bytes that have been decoded will be stored in consumed.
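A minimal sketch of incremental decoding with the stateful variant, assumed here to be PyUnicode_DecodeUTF8Stateful() (the helper name decode_chunk is hypothetical); the caller keeps the undecoded tail and prepends it to the next chunk:

    #include <Python.h>

    static PyObject *
    decode_chunk(const char *buf, Py_ssize_t size, Py_ssize_t *leftover)
    {
        Py_ssize_t consumed = 0;
        PyObject *text = PyUnicode_DecodeUTF8Stateful(buf, size, "strict",
                                                      &consumed);
        if (text == NULL) {
            return NULL;
        }
        *leftover = size - consumed;   /* bytes of a trailing incomplete sequence */
        return text;
    }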

UTF-32 Codecs

These are the UTF-32 codec APIs:

Decode size bytes from a UTF-32 encoded buffer string and return the corresponding Unicode object. errors (if non-NULL) defines the error handling. It defaults to "strict".

If byteorder is non-NULL, the decoder starts decoding using the given byte order:

*byteorder == -1: little endian
*byteorder == 0:  native order
*byteorder == 1:  big endian

If *byteorder is zero, and the first four bytes of the input data are a byte order mark (BOM), the decoder switches to this byte order and the BOM is not copied into the resulting Unicode string. If *byteorder is -1 or 1, any byte order mark is copied to the output.

After completion, *byteorder is set to the current byte order at the end of input data.

If byteorder is NULL, the codec starts in native order mode.

Return NULL if an exception was raised by the codec.

If consumed is NULL, behave like PyUnicode_DecodeUTF32(). If consumed is not NULL, PyUnicode_DecodeUTF32Stateful() will not treat trailing incomplete UTF-32 byte sequences (such as a number of bytes not divisible by four) as an error. Those bytes will not be decoded and the number of bytes that have been decoded will be stored in consumed.

UTF-16 Codecs

These are the UTF-16 codec APIs:

Decode size bytes from a UTF-16 encoded buffer string and return the corresponding Unicode object. errors (if non-NULL) defines the error handling. It defaults to "strict".

If byteorder is non-NULL, the decoder starts decoding using the given byte order:

*byteorder == -1: little endian
*byteorder == 0:  native order
*byteorder == 1:  big endian

If *byteorder is zero, and the first two bytes of the input data are a byte order mark (BOM), the decoder switches to this byte order and the BOM is not copied into the resulting Unicode string. If *byteorder is -1 or 1, any byte order mark is copied to the output (where it will result in either a \ufeff or a \ufffe character).

After completion, *byteorder is set to the current byte order at the end of input data.

If byteorder is NULL, the codec starts in native order mode.

Return NULL if an exception was raised by the codec.

If consumed is NULL, behave like PyUnicode_DecodeUTF16(). If consumed is not NULL, PyUnicode_DecodeUTF16Stateful() will not treat trailing incomplete UTF-16 byte sequences (such as an odd number of bytes or a split surrogate pair) as an error. Those bytes will not be decoded and the number of bytes that have been decoded will be stored in consumed.
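A minimal sketch of BOM-aware decoding with PyUnicode_DecodeUTF16() (the helper name decode_utf16_auto is hypothetical):

    #include <Python.h>

    static PyObject *
    decode_utf16_auto(const char *buf, Py_ssize_t size)
    {
        int byteorder = 0;   /* native order; switched by a BOM if one is found */
        PyObject *text = PyUnicode_DecodeUTF16(buf, size, "strict", &byteorder);
        /* On success, byteorder holds the byte order in effect at end of input. */
        return text;         /* NULL with an exception set on error */
    }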

UTF-7 Codecs

These are the UTF-7 codec APIs:

If consumed is NULL, behave like PyUnicode_DecodeUTF7(). If consumed is not NULL, trailing incomplete UTF-7 base-64 sections will not be treated as an error. Those bytes will not be decoded and the number of bytes that have been decoded will be stored in consumed.

Unicode-Escape Codecs

These are the "Unicode Escape" codec APIs:

Create a Unicode object by decoding size bytes of the Unicode-Escape encoded string s. Return NULL if an exception was raised by the codec.

Raw-Unicode-Escape Codecs

These are the "Raw Unicode Escape" codec APIs:

Create a Unicode object by decoding size bytes of the Raw-Unicode-Escape encoded string s. Return NULL if an exception was raised by the codec.

Latin-1 Codecs

These are the Latin-1 codec APIs: Latin-1 corresponds to the first 256 Unicode ordinals and only these are accepted by the codecs during encoding.

ASCII Codecs

These are the ASCII codec APIs. Only 7-bit ASCII data is accepted. All other codes generate errors.

Character Map Codecs

This codec is special in that it can be used to implement many different codecs (and this is in fact what was done to obtain most of the standard codecs included in the encodings package). The codec uses mappings to encode and decode characters. The mapping objects provided must support the __getitem__ mapping interface; dictionaries and sequences work well.

These are the mapping codec APIs:

Create a Unicode object by decoding size bytes of the encoded string s using the given mapping object. Return NULL if an exception was raised by the codec.

If mapping is NULL, Latin-1 decoding will be applied. Otherwise mapping must map byte ordinals (integers in the range from 0 to 255) to Unicode strings, integers (which are then interpreted as Unicode ordinals) or None. Unmapped data bytes -- ones which cause a LookupError, as well as ones which get mapped to None, 0xFFFE or '\ufffe' -- are treated as undefined mappings and cause an error.

The following codec API is special in that it maps Unicode to Unicode.

MBCS codecs for Windows

These are the MBCS codec APIs. They are currently only available on Windows and use the Win32 MBCS converters to implement the conversions. Note that MBCS (or DBCS) is a class of encodings, not just one. The target encoding is defined by the user settings on the machine running the codec.

If consumed is NULL, behave like PyUnicode_DecodeMBCS(). If consumed is not NULL, PyUnicode_DecodeMBCSStateful() will not decode a trailing lead byte and the number of bytes that have been decoded will be stored in consumed.

Methods & Slots

Methods and Slot Functions

The following APIs are capable of handling Unicode objects and strings on input (we refer to them as strings in the descriptions) and return Unicode objects or integers as appropriate.

They all return NULL or -1 if an exception occurs.

Return 1 if substr matches str[start:end] at the given tail end (direction == -1 means to do a prefix match, direction == 1 a suffix match), 0 otherwise. Return -1 if an error occurred.

Return the first position of substr in str[start:end] using the given direction (direction == 1 means to do a forward search, direction == -1 a backward search). The return value is the index of the first match; a value of -1 indicates that no match was found, and -2 indicates that an error occurred and an exception has been set.

Return the first position of the character ch in str[start:end] using the given direction (direction == 1 means to do a forward search, direction == -1 a backward search). The return value is the index of the first match; a value of -1 indicates that no match was found, and -2 indicates that an error occurred and an exception has been set.

Added in version 3.3.

Changed in version 3.7: start and end are now adjusted to behave like str[start:end].
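A minimal sketch of the character-search API, assumed here to be PyUnicode_FindChar() (the helper name first_slash is hypothetical):

    #include <Python.h>

    static Py_ssize_t
    first_slash(PyObject *str)
    {
        /* Forward search (direction == 1) over the whole string.
           Returns -1 if not found, -2 if an error occurred (exception set). */
        return PyUnicode_FindChar(str, (Py_UCS4)'/', 0,
                                  PyUnicode_GET_LENGTH(str), 1);
    }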

Return the number of non-overlapping occurrences of substr in str[start:end]. Return -1 if an error occurred.

Replace at most maxcount occurrences of substr in str with replstr and return the resulting Unicode object. maxcount == -1 means replace all occurrences.
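A one-call sketch, assuming this entry documents PyUnicode_Replace() (the helper name replace_all is hypothetical):

    #include <Python.h>

    static PyObject *
    replace_all(PyObject *str, PyObject *old, PyObject *replacement)
    {
        /* maxcount == -1 replaces every occurrence. */
        return PyUnicode_Replace(str, old, replacement, -1);
    }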