Description
I have been investigating causes of swapping on my system, and stumbled upon this fragment in ninja code:
Lines 223 to 233 at commit 03df526.
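For context, the fragment in question is (as far as I can tell) the parallelism guess in src/ninja.cc; paraphrased here, so the exact lines at 03df526 may differ slightly:

```cpp
// Paraphrased from src/ninja.cc: the default job count is guessed from the
// number of processors, with no regard for available memory.
int GuessParallelism() {
  switch (int processors = GetProcessorCount()) {
  case 0:
  case 1:
    return 2;
  case 2:
    return 3;
  default:
    return processors + 2;  // CPU count + 2 parallel jobs by default
  }
}
```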
Ninja is used by the Android build system, and since I compile a lot of Android code, its performance strongly affects the usability of my system.
My work PC has a 4-core CPU with up to 8 threads, and my home PC has an 8-core CPU with up to 16 (!!) threads. Both have 8 GB of RAM.
Needless to say, ninja compilations quickly hoard all available memory and cause heavy swapping.
Right now ninja defaults to allocating CPU+2 threads, which can easily exhaust OS resources if the amount of available memory does not "match" the number of CPUs. A few other programs use this kind of default, but most of them are games, which are optimized to handle fixed assets and conserve memory. Ninja processes external data (software source code), some of which is very memory heavy (e.g. C++). This is definitely NOT OK. If the current CPU trend continues, we will soon see consumer-targeted computers with 64+ cores. If the current RAM trend continues, most of those computers won't have a matching amount of RAM.
I have seen some discussions about conserving memory during compilation by dynamically monitoring memory usage. I don't personally care about that: most of my projects have a predictable compilation footprint.
Instead I'd like ninja to perform some basic sanity checks and limit its maximum parallelism based on available system memory. If some of the installed CPUs don't have at least 1 GB of RAM each, don't count those CPUs toward the default parallelism setting. This would keep the number of parallel jobs roughly the same on most systems with <4 CPUs as well as on enterprise Xeon build servers, while providing a more reasonable default for systems with a subpar amount of RAM. A rough sketch of what I mean is below.
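A minimal sketch, assuming a POSIX system where physical memory can be queried via sysconf(); GetPhysicalMemoryBytes() is a hypothetical helper, not an existing ninja function:

```cpp
// Sketch only: a memory-aware variant of the parallelism guess.
#include <unistd.h>
#include <algorithm>
#include <cstdint>

static int GetProcessorCount() {
  long n = sysconf(_SC_NPROCESSORS_ONLN);
  return n > 0 ? static_cast<int>(n) : 0;
}

// Hypothetical helper: total physical RAM in bytes, 0 if unknown.
static uint64_t GetPhysicalMemoryBytes() {
  long pages = sysconf(_SC_PHYS_PAGES);
  long page_size = sysconf(_SC_PAGE_SIZE);
  if (pages <= 0 || page_size <= 0)
    return 0;
  return static_cast<uint64_t>(pages) * static_cast<uint64_t>(page_size);
}

int GuessParallelism() {
  int processors = GetProcessorCount();
  // Only count a CPU toward the default if there is at least 1 GB of RAM
  // backing it; fall back to the plain CPU count if memory can't be queried.
  uint64_t ram_gb = GetPhysicalMemoryBytes() >> 30;
  int effective = processors;
  if (ram_gb > 0)
    effective = std::min<int>(processors, static_cast<int>(ram_gb));
  switch (effective) {
  case 0:
  case 1:
    return 2;
  case 2:
    return 3;
  default:
    return effective + 2;
  }
}
```

With this heuristic, an 8-thread / 8 GB machine still gets 10 jobs, a 64-core / 256 GB build server still gets 66, but a 16-thread / 8 GB machine is capped at 10 instead of 18.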