YJIT: implement cache for recently encoded/decoded contexts #10938
The `decoded_from` and one-entry caches seemed slightly hacky, particularly given that `decoded_from` broke equality comparison on `Context` objects. Since we know from Kokubun's previous work that there is significant duplication among contexts, I thought it could make sense to implement a fixed-size cache of recently encoded/decoded contexts. This turns out to work quite well.
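As a rough illustration of the idea, here is a minimal, hypothetical sketch of a fixed-size, direct-mapped cache mapping contexts to their encodings. The names (`Context`, `CtxCache`, `CACHE_SIZE`) and the hashing/eviction details are illustrative assumptions, not YJIT's actual code:

```rust
use std::hash::{Hash, Hasher};

// Illustrative cache size; the PR experiments with 128/256/512.
const CACHE_SIZE: usize = 512;

// Stand-in for YJIT's Context: any cheaply hashable, comparable value.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct Context(u64);

// Each slot remembers one recently seen context and its encoding.
#[derive(Clone, Copy)]
struct Entry {
    ctx: Context,
    encoded: u32,
}

struct CtxCache {
    slots: [Option<Entry>; CACHE_SIZE],
}

impl CtxCache {
    fn new() -> Self {
        CtxCache { slots: [None; CACHE_SIZE] }
    }

    // Direct-mapped: each context hashes to exactly one slot.
    fn slot_idx(ctx: &Context) -> usize {
        let mut h = std::collections::hash_map::DefaultHasher::new();
        ctx.hash(&mut h);
        (h.finish() as usize) % CACHE_SIZE
    }

    // On encode: if the same context was seen recently, reuse its
    // existing encoding instead of emitting a duplicate.
    fn lookup(&self, ctx: &Context) -> Option<u32> {
        let entry = self.slots[Self::slot_idx(ctx)]?;
        if entry.ctx == *ctx { Some(entry.encoded) } else { None }
    }

    // After encoding/decoding, remember the pairing; a colliding
    // entry is simply evicted (no chaining, bounded memory).
    fn insert(&mut self, ctx: Context, encoded: u32) {
        self.slots[Self::slot_idx(&ctx)] = Some(Entry { ctx, encoded });
    }
}
```

Because lookups check full equality rather than just the hash, a collision can only cause a cache miss (and re-encoding), never an incorrect reuse, and the fixed slot array keeps the cache's own memory footprint bounded.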
master branch (with var-len ctx and single-entry cache):
After, cache size 128:
After, cache size 256:
After, cache size 512:
So this is pretty cool. With minimal changes we get some amount of deduplication of contexts. If we keep making the cache bigger, at some point memory usage starts going up again because of the size of the cache itself; for lobsters this seems to happen at cache size 512. That said, I'd be inclined to keep a size of 512, because the bigger the app, the less the cache's own overhead matters.