state_machine: reduce memory usage by about 200 MiB #1429

Merged
merged 1 commit on Jan 19, 2024

Commits on Jan 16, 2024

  1. state_machine: reduce memory usage by about 200 MiB

    This one is tricky! The big picture here is that we have a cache of
    objects, which is a normal cache with an arbitrary eviction policy.
    
    However, we want to maintain an invariant --- all objects touched by a
    bar of events must not be evicted during this bar.
    
    To achieve that, we place a stash below the cache. The job of the stash
    is to catch all objects that fall out of the cache within a single bar
    (between bars, the stash is reset).
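    
    Roughly, the arrangement looks like the sketch below. This is a minimal
    illustration only (hypothetical names, a toy hash-map "cache" with a
    trivial evict-the-first-entry policy), not the actual implementation:
    objects evicted during a bar land in the stash, and the stash is reset
    when the bar ends.
    
        const std = @import("std");
    
        const cache_capacity = 2; // Tiny capacity, just for the demonstration.
    
        const CacheWithStash = struct {
            cache: std.AutoHashMap(u64, u64),
            stash: std.AutoHashMap(u64, u64),
    
            fn init(allocator: std.mem.Allocator) CacheWithStash {
                return .{
                    .cache = std.AutoHashMap(u64, u64).init(allocator),
                    .stash = std.AutoHashMap(u64, u64).init(allocator),
                };
            }
    
            fn deinit(self: *CacheWithStash) void {
                self.cache.deinit();
                self.stash.deinit();
            }
    
            // Insert/update an object; a victim evicted from the full cache
            // is caught by the stash instead of being dropped.
            fn upsert(self: *CacheWithStash, key: u64, value: u64) !void {
                if (!self.cache.contains(key) and self.cache.count() >= cache_capacity) {
                    var it = self.cache.iterator();
                    const victim = it.next().?;
                    try self.stash.put(victim.key_ptr.*, victim.value_ptr.*);
                    _ = self.cache.remove(victim.key_ptr.*);
                }
                try self.cache.put(key, value);
            }
    
            // Anything touched during the current bar is reachable in either
            // the cache or the stash.
            fn get(self: *const CacheWithStash, key: u64) ?u64 {
                return self.cache.get(key) orelse self.stash.get(key);
            }
    
            // Between bars the invariant no longer applies, so the stash resets.
            fn bar_end(self: *CacheWithStash) void {
                self.stash.clearRetainingCapacity();
            }
        };
    
        test "objects touched within a bar survive eviction" {
            var cs = CacheWithStash.init(std.testing.allocator);
            defer cs.deinit();
    
            try cs.upsert(1, 10);
            try cs.upsert(2, 20);
            try cs.upsert(3, 30); // Evicts key 1 or 2 into the stash.
    
            try std.testing.expect(cs.get(1) != null);
            try std.testing.expect(cs.get(2) != null);
            try std.testing.expect(cs.get(3) != null);
    
            cs.bar_end();
        }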
    
    What's the size of the stash that we need?
    
    The conservative estimate is the number of queries to the cache,
    that is, inserts + lookups. Using the old logic, that comes to
    
        @as(u32, ObjectTree.Table.value_count_max) +
            (options.prefetch_entries_max * constants.lsm_batch_multiple)
    
    The insight of this commit is that a lookup and an insert _for the same
    key_ are double counted that way.
    
    In other words, what we are interested in is not the number of queries
    to the cache overall, but the number of _different keys_ those queries
    touch.
    
    And for most operations, we are actually going to update exactly the
    keys we've prefetched (see the toy example below).
    
    The three exceptions are:
    
    - lookup transfers
    - lookup accounts
    - fetching the dependent transfer for posting/voiding
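    
    For the common case (prefetch a key, then update that same key), the
    double counting is easy to see in a toy example; the keys and counts
    below are made up purely for illustration:
    
        const std = @import("std");
    
        test "queries to the cache vs distinct keys touched" {
            const keys = [_]u64{ 7, 13, 42 };
    
            var distinct = std.AutoHashMap(u64, void).init(std.testing.allocator);
            defer distinct.deinit();
    
            var queries: u32 = 0;
            for (keys) |key| {
                queries += 2; // one lookup (prefetch) + one insert (update)
                try distinct.put(key, {});
            }
    
            // Sizing by queries counts every such key twice; sizing by
            // distinct keys does not.
            try std.testing.expectEqual(@as(u32, 6), queries);
            try std.testing.expectEqual(@as(u32, 3), distinct.count());
        }
    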
    matklad committed Jan 16, 2024 (commit a3eb0a7)