Lazily create dictionaries for plain Python objects #89503
Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.
bpo metadata (issue 45340, https://bugs.python.org/issue45340):
- title: Lazily create dictionaries for plain Python objects
- creator / assignee / closer: Mark.Shannon (https://github.com/markshannon)
- created: 2021-10-01 11:21:17 | closed: 2021-10-20 10:54:07 | last updated: 2021-11-30 23:14:06 (actor: iritkatriel)
- labels: interpreter-core, type-feature, 3.11 | components: Interpreter Core
- type: enhancement | priority: normal | resolution: fixed | stage: resolved | status: closed
- versions: Python 3.11 | keywords: patch | PRs: #28802
- nosy (5): methane, Mark.Shannon, josh.r, corona10, iritkatriel
- messages (7): 403010, 403492, 403494, 403737, 403830, 404424, 407407
A "normal" Python object is conceptually just a pair of pointers: one to the class and one to the instance dictionary.
With shared keys, that dictionary is redundant, since it is itself no more than a pair of pointers, one to the keys and one to the values.
By adding a pointer to the values to the object (or embedding the values directly in the object) and fetching the keys via the class, we can avoid creating a dictionary at all for many objects.
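The layout being proposed can be sketched in plain Python. This is a toy model, not CPython internals; the class and method names below are made up for illustration:

```python
# A toy sketch of the proposed layout: the class holds one shared keys
# table, and each instance holds only a class pointer plus its values --
# no per-instance dictionary at all.

class ToyClass:
    """Stands in for the type object, which owns the shared keys."""
    def __init__(self, keys):
        self.shared_keys = tuple(keys)  # one copy, shared by all instances

class ToyObject:
    """Stands in for a plain object: a class pointer plus embedded values."""
    def __init__(self, cls, values):
        self.cls = cls                  # pointer to the class
        self.values = list(values)      # values embedded in the object

    def get(self, name):
        # Attribute lookup without a dictionary: find the slot index via
        # the class's shared keys, then read the instance's value array.
        return self.values[self.cls.shared_keys.index(name)]

point_cls = ToyClass(["x", "y"])
p = ToyObject(point_cls, [1, 2])
print(p.get("y"))  # prints 2
```

A real dict only needs to be materialized if someone asks for the object's __dict__ or gives an instance attributes that don't fit the shared keys.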
See faster-cpython/ideas#72 for more details.
Hmm... Key-sharing dictionaries were accepted largely without question because they didn't harm code that broke them (said code gained nothing, but lost nothing either), and provided a significant benefit. Specifically:
The initial version of this worsens case #2: you'd have to convert to a key-sharing dict, and possibly to an unshared dict a moment later, if the set of attributes changes. And when that happens, you'd pay the cost of the now-defunct values pointer for the life of each instance (admittedly a small cost).
But the final proposal compounds this, because the penalty for lazy attribute creation (directly, or dynamically by modifying via vars()/__dict__) becomes a per-instance cost of n pointers (one for each value).
The CPython codebase rarely uses lazy attribute creation, but AFAIK there is no official recommendation to avoid it (not in PEP 8, not in the official tutorial, not even in PEP 412, which introduced key-sharing dictionaries). Imposing a fairly significant penalty on people who aren't even violating language recommendations, let alone language rules, seems harsh.
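For concreteness, "lazy attribute creation" here means adding attributes outside __init__, so that instances of one class end up with differing attribute sets. A hedged illustration (the class below is hypothetical, not from CPython):

```python
# An attribute that only some instances ever acquire, and only after
# construction. Under a shared-keys layout, an instance that takes the
# second path no longer matches the attribute set its class expects.

class Connection:
    def __init__(self, host):
        self.host = host  # set in __init__: every instance shares this key

    def record_error(self, message):
        # "Lazy" attribute: created on demand, so instances that never
        # hit an error never pay for the slot -- but those that do may
        # force a conversion away from the shared layout.
        if not hasattr(self, "errors"):
            self.errors = []
        self.errors.append(message)

c = Connection("localhost")
c.record_error("timeout")
print(c.errors)  # prints ['timeout']
```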
I'm not against this initial version (one pointer wasted isn't so bad), but the additional waste in the final version worries me greatly.
Beyond the waste, I'm worried about how you'd handle the creation of the first instance of such a class: you'd need to allocate and initialize an instance before you know how many values to tack onto it. Would the first instance use a real dict during the first __init__ call, then realloc the instance (and size all future instances) at the end of __init__? Or would it realloc on each and every attribute creation? In either case, threading issues seem like a problem.
If the gains are really impressive, it might still be worth it. But I'm worried that we'll make the language penalize people who don't know to avoid lazy attribute creation. And the complexity of this layered approach makes me worry it will allow subtle bugs in key-sharing dicts to go unnoticed (because so little code would still use them).
Hmm... And there's one other issue (that wouldn't affect people until they actually start worrying about memory overhead). Right now, if you want to determine the overhead of an instance, the options are:
This change would mean that even checking whether something using this setup has a __dict__ creates one. Without additional introspection support, there would be no way to tell the real memory usage of an instance without changing that memory usage (for the worse).
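The introspection concern can be demonstrated with a short sketch (exact memory numbers vary by CPython version, so none are asserted here):

```python
import sys

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
# Under the lazy scheme, merely evaluating vars(p) (i.e. p.__dict__)
# forces CPython to materialize a real dictionary for the instance, so
# this measurement changes the very thing it is trying to measure.
d = vars(p)
print(type(d) is dict, sys.getsizeof(d) > 0)  # prints True True
```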
I'm not really following the details of what you are saying.
You claim "Key-sharing dictionaries were accepted largely without question because they didn't harm code that broke them".
This issue proposes the same thing: less memory used, and no slower (or a bit faster).
If you are curious about how the first few instances of a class are handled, it is described here:
Lazy attribute creation is not an issue here. How well keys are shared across instances depends on the dictionary implementation, and was improved by #28520.
It would be helpful if you could give specific examples where you think this change would use more memory or be slower.