
Cleaning up tokenize module imports #128223

@podrybaum

Description


Bug report

Bug description:

This is not really a bug, but I'm preparing to submit a pull request for some minor modifications I made to tokenize.py to clean up its tangle of imports.

Here's what I'm talking about:

from token import *                      # star-import re-exports the token constants
from token import EXACT_TOKEN_TYPES
...

import token                             # imported a second time, solely to read token.__all__
__all__ = token.__all__ + ["tokenize", "generate_tokens", "detect_encoding",
                           "untokenize", "TokenInfo", "open", "TokenError"]
del token                                # deleted so the name "token" is free for reuse as a variable

This is inefficient, of course, and a bit hard to follow. As near as I can tell, the del token was done to free up the name "token", which is reused as a variable name in various places throughout the module. I think the name "tok" (which is already used elsewhere in the module) is just as descriptive for those variables, and renaming them avoids the confusion about what's actually being accessed via the "token" namespace that can arise when quickly scanning this code.
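To illustrate, here's a rough sketch of the direction I have in mind (the exact shape may differ in the PR): alias token.__all__ in the existing from-import, so there's no second import token and no del afterward:

from token import *
from token import EXACT_TOKEN_TYPES, __all__ as _token_all

__all__ = _token_all + ["tokenize", "generate_tokens", "detect_encoding",
                        "untokenize", "TokenInfo", "open", "TokenError"]

The underscore-prefixed alias is just a placeholder name; the point is that the module never binds the name "token" at all, so nothing needs to be deleted.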

I'm writing some code that depends on tokenize, so I decided to go ahead and clean it up. Pull request to follow.
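For context, the names being appended to __all__ are the module's public API; typical usage, which this cleanup doesn't change at all, looks like:

import io
from tokenize import generate_tokens

for tok in generate_tokens(io.StringIO("x = 1 + 2\n").readline):
    print(tok.type, tok.string)   # each tok is a TokenInfo namedtuple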

CPython versions tested on:

3.14, CPython main branch

Operating systems tested on:

Linux, Windows

Linked PRs


Labels

stdlib (Standard Library Python modules in the Lib/ directory), type-feature (A feature request or enhancement)
