Set cap for total accounts data size #21604

Closed
23 of 29 tasks
brooksprumo opened this issue Dec 3, 2021 · 8 comments
Assignees
Labels
enhancement New feature or request

Comments

brooksprumo (Contributor) commented Dec 3, 2021:

Overview

The data for accounts is currently unbounded, and validators store this data on disk (or in tmpfs). The Solana minimum validator specs should factor into the maximum amount of account data allowed.

Note, this work is a stop-gap to ensure the network doesn't fall over under extreme accounts data size usage. There is ongoing work to tackle this in a long-term way, likely based on economics. Please refer to the code/issues/PRs about the compute budget and dynamic rent.

Design

Set a new constant for the maximum accounts data size

  • Set to 128 GB
  • Should this be just the on-chain data, or also the actual on-disk size for a validator? See the Questions section below.
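As a rough illustration, the cap could be a single constant (the name and location are hypothetical here; jackcmay suggests ComputeBudget as a possible home below):

```rust
// Hypothetical name and location; the real constant may live in ComputeBudget.
/// Maximum total size of all account data, in bytes (128 GB).
pub const MAX_ACCOUNTS_DATA_LEN: u64 = 128 * 1024 * 1024 * 1024;

fn main() {
    println!("cap = {} bytes", MAX_ACCOUNTS_DATA_LEN);
}
```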

Make transactions fail if they exceed the max accounts data size

If a transaction attempts to allocate data such that it would exceed the maximum allowed accounts data size, then the transaction shall fail. As far as I know, this only affects creating accounts and reallocating account data.

  • Need to check instructions that create new accounts/reallocate against the remaining amount of account data size
  • Add Account Data Budget/Meter to InvokeContext
  • Handle CPI
  • Since transactions are processed in parallel, and we need transaction results to be deterministic, this size checking will be done at the block level.
  • Replay Stage will fail blocks that exceed the limit
  • Banking Stage will drop transactions that would exceed the limit
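A minimal sketch of what such an accounts-data budget/meter might look like (names and the error type are hypothetical, not the actual InvokeContext API):

```rust
/// Hypothetical accounts-data meter, in the spirit of the compute meter.
pub struct AccountsDataMeter {
    /// Cluster-wide cap on total accounts data, in bytes.
    maximum: u64,
    /// Accounts data already in use, in bytes.
    current: u64,
}

impl AccountsDataMeter {
    pub fn new(maximum: u64, current: u64) -> Self {
        Self { maximum, current }
    }

    /// Remaining accounts data budget, in bytes.
    pub fn remaining(&self) -> u64 {
        self.maximum.saturating_sub(self.current)
    }

    /// Try to consume `amount` bytes (a new allocation or realloc growth).
    /// Fails without mutating the meter if it would exceed the cap.
    pub fn consume(&mut self, amount: u64) -> Result<(), ()> {
        if amount > self.remaining() {
            return Err(());
        }
        self.current += amount;
        Ok(())
    }
}

fn main() {
    let mut meter = AccountsDataMeter::new(100, 90);
    println!("remaining = {}", meter.remaining());
    println!("alloc 10 ok? {}", meter.consume(10).is_ok());
    println!("alloc 1 ok? {}", meter.consume(1).is_ok());
}
```

A meter like this would be seeded per block so that parallel transaction results stay deterministic, matching the block-level checking described above.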

Track total account data size in Bank

  • Something needs to provide InvokeContext with the values for Accounts Data Budget, and probably Bank is the right spot
  • Add a accounts_data_len field to Bank
  • Update the total account data size after processing transactions
  • When making a new Bank from a parent, copy the parent bank's total account data size
  • Need to handle non-transaction operations that cause accounts to be cleaned up (i.e. go to zero lamports, which decreases the accounts data len). This is rent collection.
  • May need to store an initial accounts data len and a delta
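The "initial plus delta" idea from the last bullet could be sketched like this (all names hypothetical; the real Bank fields may differ). A signed delta lets rent collection and other cleanup shrink the total, not just grow it:

```rust
/// Hypothetical tracking of total accounts data size in a Bank.
pub struct AccountsDataSize {
    /// Size inherited from the parent bank (or snapshot) at bank creation.
    initial: u64,
    /// Net change accumulated while this bank processes transactions and
    /// collects rent; may be negative when accounts are reclaimed.
    delta: i64,
}

impl AccountsDataSize {
    pub fn new(initial: u64) -> Self {
        Self { initial, delta: 0 }
    }

    /// Record a change: positive for allocations, negative for cleanup.
    pub fn update(&mut self, change: i64) {
        self.delta = self.delta.saturating_add(change);
    }

    /// Current total, clamped at zero.
    pub fn current(&self) -> u64 {
        if self.delta >= 0 {
            self.initial.saturating_add(self.delta as u64)
        } else {
            self.initial.saturating_sub(self.delta.unsigned_abs())
        }
    }
}

fn main() {
    let mut size = AccountsDataSize::new(1000);
    size.update(500); // transaction allocated 500 bytes
    size.update(-200); // rent collection reclaimed 200 bytes
    println!("current = {}", size.current());
}
```

When a child bank is created, it would copy `current()` from its parent as its own `initial`.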

Compute total account data size when loading from a snapshot

  • The first bank needs to have a correct value for its total account data size
  • When loading from a snapshot, this value needs to be computed at load time
  • If loading from Genesis, I assume this can be zero
  • Alternative: store the Bank's accounts_data_len in the snapshot, and then that'll be used when loading from the snapshot.
    • What if the snapshot is from before this feature is activated? We'd still need to get the total accounts data size from somewhere.
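Computing the value at load time amounts to summing the data lengths of every account in the snapshot. A toy sketch (the `AccountEntry` type is hypothetical, standing in for whatever the accounts-db iteration actually yields):

```rust
/// Hypothetical minimal view of an account, for illustration only.
struct AccountEntry {
    data_len: u64,
}

/// Sum the data lengths of all accounts to seed the bank's accounts_data_len.
fn compute_accounts_data_len(accounts: &[AccountEntry]) -> u64 {
    accounts.iter().map(|a| a.data_len).sum()
}

fn main() {
    let accounts = vec![
        AccountEntry { data_len: 10 },
        AccountEntry { data_len: 32 },
    ];
    println!("total = {}", compute_accounts_data_len(&accounts));
}
```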

Feature gate

Questions

  • Should this be feature-gated?
    • A: Yes
  • Deterministic seed/starting value. When this feature is enabled, we need to ensure that the whole cluster has the same value for the total accounts data size in the current bank. How should that be done?
    • Is it guaranteed that computing this value at startup is deterministic? If so, then that should work.
    • If the above does not work, an initial constant would work. Since the accounts_data_len already doesn't track everything that a validator holds on disk, it is used as an approximation. So I'd say that using a value that's close should also work.
    • A: Will use a starting/seed value at feature activation
  • Should the account storage overhead factor into the max accounts data size?
    • Since new accounts without data still consume on disk resources, this might be important. It'll impact computing and tracking the total account data size though.
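If per-account storage overhead does count toward the cap, the tracked quantity becomes a function of both the account count and the data bytes. A hypothetical illustration (the 128-byte overhead figure is made up for the example, not a real Solana constant):

```rust
/// Hypothetical fixed per-account metadata overhead, in bytes.
const PER_ACCOUNT_OVERHEAD: u64 = 128;

/// Effective on-disk footprint if overhead counts toward the cap.
fn effective_size(num_accounts: u64, total_data_len: u64) -> u64 {
    num_accounts * PER_ACCOUNT_OVERHEAD + total_data_len
}

fn main() {
    // Two zero-data accounts still consume 256 bytes under this model.
    println!("size = {}", effective_size(2, 100));
}
```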

Tasks

brooksprumo self-assigned this Dec 3, 2021
brooksprumo (Contributor, Author) commented:

@CriesofCarrots @t-nelson @sakridge @carllin @jackcmay @jeffwashington Per our call earlier, does this look right for what to do?

jackcmay (Contributor) commented Dec 3, 2021:

  • MAX_TOTAL_ACCOUNT_DATA_SIZE could be placed in ComputeBudget
  • Yes, this work needs to be featurized

ryoqun (Member) commented Jan 14, 2022:

Alternative: store the Bank's accounts_data_len in the snapshot, and then that'll be used when loading from the snapshot.

@brooksprumo well, if accounts_data_len can be computed at startup time, why bother adding a new field to the snapshotted bank? As you already know, changing the snapshot format will incur a somewhat large amount of labor.

ryoqun (Member) commented Jan 14, 2022:

I know I'm very late to comment at this point, but could you polish up your (already well-written) issue description a bit, regarding the following questions?

Make transactions fail if they exceed the max accounts data size

So, if the cluster as a whole reaches the 128 GB cap on account data, is the cluster basically bricked? I think on-chain data only increases over time, as with other online platforms.

It might be preferable to take more nuanced preventive measures, like setting up alerts, increasing rent exponentially as usage approaches the 128 GB hard cap, pre-allocating the 128 GB on disk, or implementing a stale-account-data eviction mechanism?

brooksprumo (Contributor, Author) commented:

@ryoqun

if accounts_data_len can be computed at startup time, why bother adding a new field to the snapshotted bank? As you already know, changing the snapshot format will incur a somewhat large amount of labor.

This was suggested by @sakridge: store the accounts_data_len in the snapshot instead of calculating it during startup. I just want to make sure that, cluster-wide, every node gets the same value of accounts_data_len for a given (rooted) bank, regardless of whether the node is just booting up or is long-running. I'm not sure which is better—add a field to the snapshot, or compute at startup—and I'm happy to go with the better method here. Is calculating at startup preferred in your opinion?

if the cluster as a whole reaches the 128 GB cap on account data, is the cluster basically bricked?

No. Only transactions that increase accounts data size will fail; all other transactions will be unaffected.

It might be preferable to take more nuanced preventive measures, like setting up alerts, increasing rent exponentially as usage approaches the 128 GB hard cap, pre-allocating the 128 GB on disk, or implementing a stale-account-data eviction mechanism?

I agree. I'll make the issue clearer that this is a stop-gap for the better preventative measures you've outlined. Dynamic rent (#21348) is another issue I'm looking at. I like the idea of setting up alerts; I'll look into adding that. Same with pre-allocating.

or implementing a stale-account-data eviction mechanism?

My understanding is that this is what Rent is for. I'm not sure there's an acceptable way to evict rent-exempt account data, right? Happy to hear about use-cases for this though!

brooksprumo (Contributor, Author) commented:

if accounts_data_len can be computed at startup time, why bother adding a new field to the snapshotted bank? As you already know, changing the snapshot format will incur a somewhat large amount of labor.

More info: I just ran some builds on my GCE validator against mainnet-beta to see if the accounts data size computed when loading a snapshot matched the value from regular replay for a bank at the same slot. The two values were different. I'm not sure yet why, but that is the reason having the accounts data size in the snapshot would be needed.

ryoqun (Member) commented Jan 20, 2022:

Thanks for the additional details.

Is calculating at startup preferred in your opinion?

Yeah, but it's not a strong opinion. The role of accounts_data_len would be like the snapshot hash: ideally, it should be possible to calculate it for any root. I'm a bit worried about silently introducing some discrepancy bug, as you just observed (#21604 (comment)).

I think a bank (snapshot) field should be avoided if possible. Also, a small sysvar might be desirable if it's unavoidable, for the ultimate safe-snapshot effort (pending for too long...) #6936

brooksprumo (Contributor, Author) commented:

Closing this, as it cannot be merged: it would introduce potential non-determinism. For more information, please refer to #27029.

brooksprumo closed this as not planned, Feb 10, 2023
3 participants