gh-63161: Fix PEP 263 support #139481
Conversation
* Support non-UTF-8 shebang lines and comments if a non-UTF-8 encoding is specified.
* Detect decoding errors in comments for the default UTF-8 encoding.
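A minimal sketch of the two behaviors described above, assuming the semantics this PR targets; the byte strings and the expected outcomes are illustrative, not taken from the PR's test suite:

    # 1. With an explicit non-UTF-8 coding declaration, non-UTF-8 bytes in
    #    the shebang line or in comments should be accepted:
    src_latin1 = b"#!/usr/bin/caf\xe9\n# -*- coding: latin-1 -*-\nx = 1\n"
    compile(src_latin1, "<latin-1>", "exec")  # expected to succeed

    # 2. Under the default UTF-8 encoding, an undecodable byte in a comment
    #    should now raise a SyntaxError instead of going unnoticed:
    src_utf8 = b"# caf\xe9\nx = 1\n"
    try:
        compile(src_utf8, "<utf-8>", "exec")
    except SyntaxError as exc:
        print(exc)  # decoding error detected while tokenizing the comment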
const char *line = tok->lineno <= 2 ? tok->buf : tok->cur;
int lineno = tok->lineno <= 2 ? 1 : tok->lineno;
if (!tok->encoding) {
    /* The default encoding is UTF-8, so make sure we don't have any
       non-UTF-8 sequences in it. */
    if (!_PyTokenizer_ensure_utf8(line, tok, lineno)) {
        _PyTokenizer_error_ret(tok);
        return 0;
    }
}
else {
    PyObject *tmp = PyUnicode_Decode(line, strlen(line),
Suggested change:

const int is_pseudo_line = (tok->lineno <= 2);
const char *line = is_pseudo_line ? tok->buf : tok->cur;
int lineno = is_pseudo_line ? 1 : tok->lineno;
size_t slen = strlen(line);
/* Guard the narrowing cast below: reject lines whose length does not
   fit in Py_ssize_t. */
if (slen > (size_t)PY_SSIZE_T_MAX) {
    _PyTokenizer_error_ret(tok);
    return 0;
}
Py_ssize_t linelen = (Py_ssize_t)slen;
if (!tok->encoding) {
    /* The default encoding is UTF-8, so make sure we don't have any
       non-UTF-8 sequences in it. */
    if (!_PyTokenizer_ensure_utf8(line, tok, lineno)) {
        _PyTokenizer_error_ret(tok);
        return 0;
    }
}
else {
    PyObject *tmp = PyUnicode_Decode(line, linelen,
LGTM. I am not sure about the tokenizer changes, but I trust unit tests :-)
Unfortunately, there was a regression which caused one of the existing tests to fail. Earlier, a decoding error for the default (UTF-8) encoding was raised only when the tokenizer tried to decode an identifier or string literal, so the traceback showed the affected line with the identifier or string literal containing the undecodable bytes underlined. Now it is raised at the beginning of parsing a string, or after reading a line from the file (only for the first few lines). Fixing this regression was not easy, but now the traceback shows the line with the cursor pointing exactly at the undecodable byte, and this works in more cases than before.

However, it did not work and still does not work if the encoding is explicitly specified: in that case you get a SyntaxError without a correct reference to the position of the decoding error. That is a different, complex issue.
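A hedged illustration of the traceback difference described in this comment; the exact line and column values reported are assumptions based on the description above, not verified output:

    # An undecodable byte inside a string literal (invalid UTF-8):
    bad = b"x = 1\ny = '\xff'\n"

    try:
        compile(bad, "<default-utf8>", "exec")
    except SyntaxError as exc:
        # With the default UTF-8 encoding, the error location should now
        # point at the offending byte on line 2.
        print(exc.lineno, exc.offset)

    try:
        compile(b"# -*- coding: utf-8 -*-\n" + bad, "<explicit>", "exec")
    except SyntaxError as exc:
        # With an explicitly declared encoding, a SyntaxError is still
        # raised, but without a correct reference to the error position.
        print(exc.lineno, exc.offset)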