[ fix ] Exponentially reduce memory consumption for elab scripts running #3194
It's been noticed for a long time that the memory consumption of the compiler when executing elaborator scripts grows roughly exponentially with the depth of monadic bindings reached in the `Elab` monad. At the same time, it's been noticed that code like `act1 *> act2 <* act3` uses much more memory compared to the code `do {act1; r <- act2; act3; pure r}` when it sits somewhere deep inside the script. My guess is that this is because of a closure-preservation problem, similar to the runtime problems of closure allocation.
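For concreteness, here is a minimal sketch of the two forms, written generically over any monad (`act1`, `act2` and `act3` are placeholder actions, not code from this PR). Both return the result of the middle action; only the encoding differs:

```idris
-- The applicative form: `(act1 *> act2) <* act3` keeps the result
-- of `act2` while discarding the results of `act1` and `act3`.
applicativeForm : Monad m => m a -> m b -> m c -> m b
applicativeForm act1 act2 act3 = act1 *> act2 <* act3

-- The equivalent do-notation form, observed to use far less memory
-- when nested deeply.
doForm : Monad m => m a -> m b -> m c -> m b
doForm act1 act2 act3 = do
  _ <- act1
  r <- act2
  _ <- act3
  pure r
```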
Since both `map` and `<*>` were implemented through `Bind` in the `Elab` monad, I thought that running them in `Core` would be much more efficient (because `Core` is effectively a glamorous `IO`, which runs optimised, with no cost for bindings). I tried this, but I didn't suspect just how efficient it would be! My guess is that reducing the bind depth severely cuts down the unneeded closure-preservation work in the evaluator.
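As a toy illustration of the idea (a sketch only, not the compiler's actual `Elab` representation): implementing `map` and `<*>` through bind adds levels of bind depth on every use, while dedicated constructors can be interpreted directly, here into `IO` standing in for `Core`:

```idris
-- How `map` and `<*>` look when encoded through bind: each use
-- adds one or two extra levels of bind depth.
mapViaBind : Monad m => (a -> b) -> m a -> m b
mapViaBind f mx = mx >>= pure . f

apViaBind : Monad m => m (a -> b) -> m a -> m b
apViaBind mf mx = mf >>= \f => mx >>= \x => pure (f x)

-- A toy `Elab`-like syntax tree with dedicated `Map` and `Ap`
-- constructors instead of the bind encoding above.
data ElabF : Type -> Type where
  Pure : a -> ElabF a
  Map  : (a -> b) -> ElabF a -> ElabF b
  Ap   : ElabF (a -> b) -> ElabF a -> ElabF b
  Bind : ElabF a -> (a -> ElabF b) -> ElabF b

-- The interpreter runs `Map` and `Ap` directly, without growing
-- the chain of binds.
run : ElabF a -> IO a
run (Pure x)   = pure x
run (Map f x)  = map f (run x)
run (Ap f x)   = run f <*> run x
run (Bind x k) = run x >>= run . k
```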
Currently, one of my big elaborator scripts, which on the current `main` runs for 3 hours 17 minutes taking 16.5 GB of memory, runs in just 2 hours 24 minutes using 10 GB with this patch (both runs were on the same machine using the patched GC from Edwin). More impressively, another elaborator script that used more than 360 GB of memory and was unable to finish due to an out-of-memory error now finishes successfully using just 10 GB of memory with the change from this PR.

Those tests are large and use a lot of dependencies, so it's really hard to make a small test out of them (and anyway, those cases run for several hours). I didn't manage to make a small test that exploits the problem being fixed.
In the implementation, to manage the newly added `Map` and `Ap` constructors, I used the `applyToStack` function that is used in the management of the `Bind` constructor to apply an arbitrary `NF` to an arbitrary closure. Unlike the `Bind` case, I used the weakest `EvalOpts` in order not to reduce expressions too much (in fact, to preserve the status quo behaviour of the produced expressions). I extended one existing test to make sure the behaviour is preserved in a different case too.
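To sketch that design choice with toy stand-ins (the compiler's real `NF`, `EvalOpts` and `applyToStack` are considerably more involved than the simplified versions below): the application performed for the new constructors uses the weakest options, so results are not reduced any further than they were before the patch:

```idris
-- Toy stand-in for `EvalOpts`: how aggressively to reduce.
data EvalOpts = Weakest | Full

-- Toy "normal forms": either a function awaiting an argument or a
-- plain value (the compiler's `NF` is far richer).
data NF : Type where
  NFun : (Nat -> NF) -> NF
  NVal : Nat -> NF

-- Toy stand-in for `applyToStack`: apply a normal form to a stack
-- of arguments. In the real evaluator the options control how much
-- the result is reduced; here they are only threaded through.
applyToStack : EvalOpts -> NF -> List Nat -> NF
applyToStack opts (NFun f) (arg :: stk) = applyToStack opts (f arg) stk
applyToStack _    nf       _            = nf

-- The new `Map`/`Ap` cases boil down to such an application with
-- the weakest options, unlike `Bind`, which reduces more eagerly.
evalMap : NF -> Nat -> NF
evalMap fun arg = applyToStack Weakest fun [arg]
```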