Enum creation non-linear in the number of values #89580
Creating large enums takes a significant amount of time, and the cost appears to be nonlinear in the number of entries in the enum. Locally, importing a single Python file containing one enum and taking this to the extreme: 1000 entries takes about 0.058s. This is partially addressed by https://bugs.python.org/issue38659, and I can confirm that using … Note that it is not simply parsing the file or creating the instances; it is to do with the cardinality: creating 100 enums with 100 entries each is far faster than a single 10000-entry enum.
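A minimal sketch of how such a measurement might be reproduced; the `enum_source` helper, the class name `Big`, and the entry counts are illustrative, not the reporter's exact script. The source is compiled up front so that mostly class-body execution (member creation) is timed, not parsing:

```python
import timeit

def enum_source(n):
    # Generate a module body defining one Enum with n members.
    lines = ["import enum", "class Big(enum.Enum):"]
    lines += [f"    M{i} = {i}" for i in range(n)]
    return "\n".join(lines)

for n in (100, 1000, 10000):
    code = compile(enum_source(n), "<generated>", "exec")  # parse once, up front
    t = timeit.timeit(lambda: exec(code, {}), number=1)    # time enum creation
    print(f"{n:>6} entries: {t:.3f}s")
```

If creation time were linear, the per-entry cost would stay roughly constant as n grows; the report is that it does not.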
The timing is clearly quadratic in the number of attributes. Pressing Ctrl-C in the middle of executing the largest examples points directly to the cause: when we consider the next attribute, we loop over all previous ones at enum.py:238.
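A simplified illustration of the pattern being described, not the actual enum.py code: if each new member name is checked against a list of all names seen so far, inserting n members costs O(1 + 2 + ... + n) = O(n²) comparisons overall.

```python
# Simplified sketch of the quadratic pattern (hypothetical class, not enum.py).
class MemberNamespace(dict):
    def __init__(self):
        super().__init__()
        self._member_names = []            # list -> O(n) membership test

    def __setitem__(self, key, value):
        if key in self._member_names:      # scans every previously seen name
            raise TypeError(f"{key!r} already defined")
        if not key.startswith("_"):
            self._member_names.append(key)
        super().__setitem__(key, value)
```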
Over at PyPy, arigo authored a fix and we applied it to the as-yet unreleased pypy3.8. Note that it cannot be applied directly to CPython, as sets are not guaranteed to be insertion-ordered there. Does the fix break anything? The tests in Lib/test/test_enum.py pass after applying it.
I removed the fix's reliance on sets being insertion-ordered (which holds in PyPy but not in CPython) and opened a pull request.
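One way to get O(1) duplicate checks while still preserving definition order, roughly along the lines of the fix as described, is to track member names in a dict (insertion-ordered since Python 3.7) instead of a list or a set. This is a sketch of the idea only, not the merged patch; `MemberNamespace` is a hypothetical stand-in:

```python
# Sketch of the fix idea: a dict gives O(1) lookups and keeps insertion order,
# so no per-member scan of all previous names is needed. Not the actual patch.
class MemberNamespace(dict):
    def __init__(self):
        super().__init__()
        self._member_names = {}            # name -> None, ordered, O(1) lookup

    def __setitem__(self, key, value):
        if key in self._member_names:      # hash lookup instead of a list scan
            raise TypeError(f"{key!r} already defined")
        if not key.startswith("_"):
            self._member_names[key] = None
        super().__setitem__(key, value)

    @property
    def member_names(self):
        return list(self._member_names)    # iteration order == definition order
```

With this shape, adding n members does n constant-time checks, making enum creation linear in the number of values.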