Description
When working with some code, I regularly check the minimum heap required to run it, and so far in all cases I have seen the pattern that this figure is bounded by the memory required to finish compilation. That can be called intuitively unexpected (intuition says that memory required for compilation should be linear in program size, while runtime memory can be unbounded), so it would be nice to rule that out. Without redoing the analysis I did previously, my suspicion falls on the exponential container growth policy for internal compiler data structures - exponential with a big enough base (2). Arguably, that's a bad heuristic for anything "micro" - it essentially says "whatever we have, even more is coming", while an "optimistic" policy for a micro thing would be "whatever we have, it should end soon" ;-).
So, I propose cutting the exponent base down to 1.5, or maybe even 1.25, if not going to a fixed-increment policy. And as these choices have a known theoretical performance impact, the policy should be configurable - at least by factoring it out as a macro, like
#define ALLOC_SIZE_POLICY(old_size) (<compute new size here>)
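As a minimal sketch of what this could look like: the macro below implements a 1.5x growth factor using integer math, and a hypothetical `next_capacity` helper shows how callers would use it. The names `ALLOC_SIZE_POLICY` and `next_capacity` are illustrative assumptions here, not actual compiler internals.

```c
#include <stddef.h>

/* Growth policy factored out as an overridable macro so a port can
 * tune it. This default grows by a factor of 1.5 (in integer math)
 * instead of the usual doubling; the +1 prevents the policy from
 * stalling at sizes 0 and 1, where old_size/2 rounds down to 0. */
#ifndef ALLOC_SIZE_POLICY
#define ALLOC_SIZE_POLICY(old_size) ((old_size) + (old_size) / 2 + 1)
#endif

/* Hypothetical helper: grow `cap` by the policy until it can hold
 * at least `needed` elements. */
static size_t next_capacity(size_t cap, size_t needed) {
    while (cap < needed) {
        cap = ALLOC_SIZE_POLICY(cap);
    }
    return cap;
}
```

A fixed-increment policy would then just be `#define ALLOC_SIZE_POLICY(old_size) ((old_size) + 16)` or similar, configurable per port without touching the call sites.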