
Add CLI flags to config LevelDB table/total sizes #981

Conversation

@rroblak (Contributor) commented Aug 29, 2023

Description

I wired up CLI flags to allow configuring LevelDB table and total sizes:

  • `--leveldb.compaction.table.size`, LevelDB SSTable file size factor in MiB (default: 2)
  • `--leveldb.compaction.table.multiplier`, multiplier on LevelDB SSTable file size (default: 1)
  • `--leveldb.compaction.total.size`, total size factor in MiB of LevelDB levels (default: 10)
  • `--leveldb.compaction.total.multiplier`, multiplier on LevelDB total level size (default: 10)

N.B. the default values for these configs are exactly the same as before this changeset, so Bor behavior should not change unless these flags are deliberately overridden. Bor/Geth inherited the default values from [the `goleveldb` defaults](https://github.com/syndtr/goleveldb/blob/126854af5e6d8295ef8e8bee3040dd8380ae72e8/leveldb/opt/options.go).
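
For context, here is a minimal sketch of how these four flag values would presumably land in goleveldb's `opt.Options`. The `toLevelDBOptions` helper is hypothetical (the real plumbing goes through Bor's config and node setup), but the `opt` field names and size constants are from the goleveldb package:

```go
package main

import (
	"fmt"

	"github.com/syndtr/goleveldb/leveldb/opt"
)

// toLevelDBOptions (hypothetical helper) translates the flag values, given in
// MiB and plain multipliers, into goleveldb's compaction options.
func toLevelDBOptions(tableSizeMiB int, tableMul float64, totalSizeMiB int, totalMul float64) *opt.Options {
	return &opt.Options{
		CompactionTableSize:           tableSizeMiB * opt.MiB, // default 2 MiB
		CompactionTableSizeMultiplier: tableMul,               // default 1.0
		CompactionTotalSize:           totalSizeMiB * opt.MiB, // default 10 MiB
		CompactionTotalSizeMultiplier: totalMul,               // default 10.0
	}
}

func main() {
	// Values corresponding to the overrides discussed below.
	o := toLevelDBOptions(4, 1, 20, 10)
	fmt.Printf("table=%d bytes, total=%d bytes\n", o.CompactionTableSize, o.CompactionTotalSize)
}
```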

We (Alchemy) found it necessary to override these configs as follows to keep Bor archive nodes tracking the canonical chain:

  • `--leveldb.compaction.table.size=4`
  • `--leveldb.compaction.total.size=20`

These overrides double the size of LevelDB SSTable files (2 MiB -> 4 MiB) and also the total amount of data in each level (100 MiB -> 200 MiB, 1,000 MiB -> 2,000 MiB, etc.). The idea is to have LevelDB read and write data in larger chunks while keeping the proportional frequency of compaction operations the same as in the original defaults defined by Dean and Ghemawat.
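
As a rough back-of-the-envelope illustration of that level-size scaling (a simplification, not goleveldb's actual per-level sizing code):

```go
package main

import "fmt"

// levelBudgetMiB approximates the total-size budget of LevelDB level n as
// total.size * total.multiplier^n, which reproduces the numbers quoted above
// (100 MiB -> 200 MiB, 1,000 MiB -> 2,000 MiB, ...). Illustration only.
func levelBudgetMiB(totalSizeMiB, multiplier float64, level int) float64 {
	budget := totalSizeMiB
	for i := 0; i < level; i++ {
		budget *= multiplier
	}
	return budget
}

func main() {
	for level := 1; level <= 3; level++ {
		def := levelBudgetMiB(10, 10, level) // defaults
		ovr := levelBudgetMiB(20, 10, level) // --leveldb.compaction.total.size=20
		fmt.Printf("level %d: %6.0f MiB (default) -> %6.0f MiB (override)\n", level, def, ovr)
	}
}
```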

Without these overrides we found that our archive nodes would tend to fall into a "LevelDB compaction loop of death": the incoming stream of blockchain data could not flow into LevelDB's structure quickly enough, so the node blocked writes for long periods while LevelDB's single-threaded compaction reorganized the data. Over time the nodes would fall farther and farther behind the canonical chain head, metaphorically dying a slow node's death.

These configs can be changed on existing node databases (resyncing is not necessary). LevelDB appears to work correctly with SSTable files of different sizes. Note that the database does not undergo any sort of migration when changing these configs. Only newly-written files (due to new data or compaction) are affected by these configs.

Changes

  • Bugfix (non-breaking change that solves an issue)
  • Hotfix (change that solves an urgent issue, and requires immediate attention)
  • New feature (non-breaking change that adds functionality)
  • Breaking change (change that is not backwards-compatible and/or changes current functionality)
  • Changes only for a subset of nodes

Nodes audience

We added the following flags to our nodes' `bor server` invocation to allow them to catch back up to the canonical chain head: `--leveldb.compaction.table.size=4 --leveldb.compaction.total.size=20`.

Checklist

  • I have added at least 2 reviewers or the whole pos-v1 team
  • I have added sufficient documentation in code
  • I will be resolving comments - if any - by pushing each fix in a separate commit and linking the commit hash in the comment reply
  • Created a task in Jira and informed the team for implementation in Erigon client (if applicable)
  • Includes RPC methods changes, and the Notion documentation has been updated

Testing

  • I have added unit tests
  • I have added tests to CI
  • I have tested this code manually on local environment
  • I have tested this code manually on remote devnet using express-cli
  • I have tested this code manually on mumbai
  • We have been running this code in production mainnet for a couple of months
  • I have created new e2e tests into express-cli

Manual tests

  1. Added the flags as specified above and started `bor server`.
  2. Observed Bor archive node logs and block sync rate across many nodes in production.
  • Nodes that had previously fallen behind (at times with a block sync rate of 0) increased their sync rate substantially and caught back up to head.
  3. Verified that JSON-RPC responses and request durations were nominal as compared to nodes without this patch.
  4. Also deployed to Bor full nodes to minimize config drift and observed no ill effects.

@rroblak force-pushed the rroblak/upstream-level-db-patch-develop-cherry branch from db548a5 to 675092d on August 29, 2023 22:03

@rroblak force-pushed the rroblak/upstream-level-db-patch-develop-cherry branch from 675092d to 95953ad on August 29, 2023 22:23
@cffls (Contributor) commented Aug 30, 2023

Thanks @rroblak for contributing this feature and creating a detailed PR! Added a few comments about db option passing. There are some linter failures; you can check them by running `make test` locally.

Review threads (resolved):
  • internal/cli/server/config.go
  • internal/cli/server/flags.go
  • internal/cli/server/config.go (outdated)
  • core/rawdb/database.go (outdated)
  • node/node.go (outdated)
@rroblak rroblak requested a review from cffls September 8, 2023 17:24
@cffls (Contributor) left a comment

LGTM. Thank you @rroblak for addressing the comments!

@marcello33 merged commit ebc7dc2 into maticnetwork:develop on Sep 12, 2023 (9 checks passed)