Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.
assignee = 'https://github.com/ethanfurman'
closed_at = None
created_at = <Date 2021-10-08.23:52:33.688>
labels = ['3.11', '3.10', 'performance']
title = 'Enum creation non-linear in the number of values'
updated_at = <Date 2021-10-15.16:07:46.518>
user = 'https://github.com/olliemath'
Creating large enums takes a significant amount of time, and this appears to be non-linear in the number of entries in the enum. Locally, importing a single Python file and taking this to the extreme (see the sketch below these timings):
1000 entries - 0.058s
10000 entries - 4.327s
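
A minimal reproduction sketch, assuming you just want to time a fresh import of a generated module (the module and member names here are arbitrary):

```python
import importlib
import sys
import time

def time_enum_import(n, module="big_enum"):
    """Write a module containing one n-member Enum, then time its import."""
    with open(f"{module}.py", "w") as f:
        f.write("from enum import Enum\n\nclass Big(Enum):\n")
        for i in range(n):
            f.write(f"    M{i} = {i}\n")
    sys.modules.pop(module, None)   # force a fresh import each call
    importlib.invalidate_caches()   # make sure the rewritten file is seen
    start = time.perf_counter()
    importlib.import_module(module)
    return time.perf_counter() - start

for n in (1_000, 10_000):
    print(f"{n} entries - {time_enum_import(n):.3f}s")
```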
This is partially addressed by https://bugs.python.org/issue38659, and I can confirm that using @_simple_enum does not have this problem. But that API appears to be intended only for internal use, and the 'happy path' for user-defined enums is still slow.
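
For context, usage of that internal API looks roughly like this; it is underscore-prefixed and undocumented, so the exact behaviour and signature may change between releases:

```python
from enum import Enum, _simple_enum

# _simple_enum bypasses most of the metaclass machinery that makes
# ordinary Enum creation expensive; it is internal to the stdlib.
@_simple_enum(Enum)
class Color:
    RED = 1
    GREEN = 2
    BLUE = 3
```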
Note that it is not simply parsing the file or creating the instances; it is the cardinality that matters. Creating 100 enums with 100 entries each is far faster than creating a single 10000-entry enum.
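
That difference is easy to see with the functional API alone, which takes file parsing out of the picture entirely (the class and member names here are illustrative):

```python
import time
from enum import Enum

def make_enums(n_enums, n_members):
    """Create n_enums separate Enum classes, each with n_members members."""
    start = time.perf_counter()
    for e in range(n_enums):
        Enum(f"E{e}", [(f"M{i}", i) for i in range(n_members)])
    return time.perf_counter() - start

print(f"100 enums x 100 entries: {make_enums(100, 100):.3f}s")
print(f"1 enum x 10000 entries:  {make_enums(1, 10_000):.3f}s")
```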
Over at PyPy, arigo authored this fix, and we applied it to the as-yet-unreleased pypy3.8. Note that it cannot be applied directly to CPython, since CPython sets are not ordered. Does the fix break anything? The tests in Lib/test/test_enum.py passed after applying it.
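
For anyone wondering why ordering matters here: Enum iterates members in definition order, which CPython dicts guarantee (since 3.7) but CPython sets do not:

```python
names = ["RED", "GREEN", "BLUE", "ALPHA"]
print(list(dict.fromkeys(names)))  # ['RED', 'GREEN', 'BLUE', 'ALPHA'] -- order kept
print(list(set(names)))            # arbitrary, hash-dependent order
```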