Description
As suggested by @guidobrei , an even better solution to the context memory allocation problem (see #1588) would be a layered evaluation context that holds references to the API, transaction, client, invocation, and hook contexts.
When this layered context is queried for a value, it searches through the contexts in the specified order and never needs to merge them, greatly reducing memory churn.
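To illustrate the idea, here is a minimal, hypothetical sketch in Java. It does not reflect the actual draft PR or the OpenFeature SDK's `EvaluationContext` API; the `Context` interface, class names, and precedence order (higher-precedence layers first) are assumptions made for the example only.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical minimal context interface, for illustration only.
interface Context {
    Object getValue(String key);
    Map<String, Object> asMap();
}

// Layered context: keeps references to the underlying contexts and resolves
// keys by searching them in precedence order, without merging any maps.
final class LayeredContext implements Context {
    private final List<Context> layers; // highest precedence first

    LayeredContext(List<Context> layers) {
        this.layers = layers;
    }

    @Override
    public Object getValue(String key) {
        for (Context layer : layers) {
            Object value = layer.getValue(key);
            if (value != null) {
                return value; // first hit wins; no intermediate map is allocated
            }
        }
        return null;
    }

    // A merged view is only materialized on explicit request,
    // e.g. if a consumer needs a flat map of all attributes.
    @Override
    public Map<String, Object> asMap() {
        Map<String, Object> merged = new LinkedHashMap<>();
        // iterate lowest precedence first so higher layers overwrite
        for (int i = layers.size() - 1; i >= 0; i--) {
            merged.putAll(layers.get(i).asMap());
        }
        return merged;
    }
}
```

The point of the design is that the common path (looking up a handful of attributes during an evaluation) never copies or merges the per-layer maps; merging only happens if a flat view is explicitly requested.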
An as-yet-untested implementation can be found in this draft PR: #1717
According to the IntelliJ Profiler running on AllocationBenchmark#main, the overall memory footprint of flag evaluations is approximately halved (take these numbers with caution; I am not sure whether this implementation hasn't broken anything).
Note: while browsing through the code, I discovered a few more potential opportunities to reduce our memory footprint. These should be tackled separately, as the context allocation addressed here is the main driver of memory churn.