
build: update to walletdb/bdb version w/ mmap pre-sizing #4243

Closed

Conversation

@Roasbeef (Member) commented May 4, 2020

In this commit, we update to the walletdb/bdb version that now has
mmap-pre-sizing. By sizing the memory map ahead of time, we aim to avoid
the mass copies in the DB each time we need to expand the size of the
DB. The new version will set the mmap size to 2x the db size, which
allows the DB to double before we need to remap everything.
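
For context, bbolt exposes this knob through Options.InitialMmapSize; here is a minimal sketch of the pre-sizing idea (the actual wiring inside walletdb/bdb may differ):

package main

import (
	"os"

	bolt "go.etcd.io/bbolt"
)

// openPresized opens a bbolt database with the mmap pre-sized to twice
// the current file size, so the file can double before bbolt needs to
// remap (and copy) it. Sketch only; walletdb/bdb may wire this differently.
func openPresized(dbPath string) (*bolt.DB, error) {
	size := 0
	if fi, err := os.Stat(dbPath); err == nil {
		size = int(fi.Size())
	}
	return bolt.Open(dbPath, 0600, &bolt.Options{
		// bbolt will not remap until the data file outgrows
		// this initial mmap size.
		InitialMmapSize: size * 2,
	})
}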

@joostjager (Collaborator):

Is there a PR on btcwallet to review, or do you want me to just review the lnd changes here?

@@ -165,7 +166,10 @@ func Open(dbPath string, modifiers ...OptionModifier) (*DB, error) {

 	// Specify bbolt freelist options to reduce heap pressure in case the
 	// freelist grows to be very large.
-	bdb, err := kvdb.Open(kvdb.BoltBackendName, path, opts.NoFreelistSync)
+	bboltOpts := &bbolt.Options{
Collaborator:

Q: In the etcd PR, NoFreelistSync is an lncfg option. I wonder if we ever intend to actually set this to false? And if not, then perhaps we could default it somewhere (walletdb?) so that it's less of a hurdle to remove it from the code later on.

Collaborator:

I think we want this to be user-configurable, as some users have reported slow startup times on weak machines (mobile, rpi) and opt to not set NoFreelistSync.
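
For illustration, keeping the flag user-configurable could look roughly like this (type and field names here are hypothetical, not lnd's actual lncfg surface):

package main

import bolt "go.etcd.io/bbolt"

// Options collects user-facing DB tuning knobs (hypothetical name).
type Options struct {
	// NoFreelistSync skips persisting the freelist on each commit:
	// commits get faster, but the next Open must rebuild the freelist
	// by scanning the file, which is the slow startup users report.
	NoFreelistSync bool
}

func openWithOpts(path string, o Options) (*bolt.DB, error) {
	return bolt.Open(path, 0600, &bolt.Options{
		NoFreelistSync: o.NoFreelistSync,
	})
}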

@@ -251,7 +255,10 @@ func createChannelDB(dbPath string) error {
 	}

 	path := filepath.Join(dbPath, dbName)
-	bdb, err := kvdb.Create(kvdb.BoltBackendName, path, true)
+	bboltOpts := &bbolt.Options{
+		NoFreelistSync: true,
Collaborator:

Though this is just for creating the db, should this be user-specified as well? Setting it one way or another shouldn't give any performance benefit as:

  • if we sync the freelist (this takes time), the next startup upon Open will be shorter
  • if we don't sync the freelist (quick Create time), the next startup upon Open will be longer as it needs to construct the freelist

So both scenarios are quicker in one area and slower in another. The only reason I can think of would be consistency.

@Roasbeef (Member, Author) commented May 5, 2020

@joostjager yeah that's here: btcsuite/btcwallet#697

@Roasbeef requested review from Crypt-iQ and removed the review request for cfromknecht and joostjager on May 5, 2020 23:17
@Roasbeef (Member, Author) commented May 5, 2020

Pushed up two small optimizations to the migration:

  1. Re-use the temp variables that remain the same and/or are used just to access keys.
  2. Remove the usage of append for attemptInfo as this is where most of the memory is being allocated. As is, it'll end up doubling the size of the slice each time to make room for future appends. In our case, there will be no future appends, so we can just copy the exact amount of bytes we need.

For reference, here's a heap profile of the migration as is: https://gist.github.com/Roasbeef/fedd92542a496d30458bf120f5426817

-	attemptInfo = append(attemptInfo, zero[:]...)
+	// Copy over the attempt info into a new slice, the last 8
+	// bytes will be zero to mark a zero attempt time.
+	newAttemptInfo := make([]byte, len(attemptInfo))
@Roasbeef (Member, Author):

I think we can use a buffer pool both here and above, assuming the Put method copies the bytes over.

Collaborator:

The Put method doesn't copy the value bytes over, but it does copy the keys: https://github.com/etcd-io/bbolt/blob/master/node.go#L139
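
Given that, a naive buffer pool for values would be unsafe: bbolt keeps referencing the value slice until the transaction commits. A sketch of the hazard (bucket name and helper are made up):

package main

import (
	"sync"

	bolt "go.etcd.io/bbolt"
)

var bufPool = sync.Pool{
	New: func() interface{} { return make([]byte, 0, 512) },
}

// putPooled shows why a value-buffer pool is UNSAFE here: bbolt's Put
// copies the key but keeps referencing the value slice until the
// transaction commits, so the buffer must not be reused before then.
func putPooled(db *bolt.DB, key, attemptInfo []byte) error {
	return db.Update(func(tx *bolt.Tx) error {
		b, err := tx.CreateBucketIfNotExists([]byte("payments"))
		if err != nil {
			return err
		}
		buf := bufPool.Get().([]byte)[:0]
		buf = append(buf, attemptInfo...)
		if err := b.Put(key, buf); err != nil {
			return err
		}
		bufPool.Put(buf) // BUG: bbolt still references buf until commit.
		return nil
	})
}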

@joostjager (Collaborator):

Can the migration problem be resolved by not using transactions? Make a (disk) copy of the database, apply all migrations without (big) transactions, and if there's no error, delete the old db and rename the new one?

@Crypt-iQ (Collaborator) commented May 6, 2020

> Can the migration problem be resolved by not using transactions? Make a (disk) copy of the database, apply all migrations without (big) transactions, and if there's no error, delete the old db and rename the new one?

This would work; we could also split up transactions into smaller, less resource-intensive ones. If the migration is to a new bucket and the migration fails, the next run could just delete the new bucket and try again.

@Roasbeef (Member, Author) commented May 6, 2020

> Can the migration problem be resolved by not using transactions? Make a (disk) copy of the database, apply all migrations without (big) transactions, and if there's no error, delete the old db and rename the new one?

What does this give us that the dry run mode doesn't? In the end, we need to commit it all in a single transaction.

@joostjager (Collaborator):

That you don't need to run it all in one transaction. Copying over the migrated database is the transaction.

@cfromknecht mentioned this pull request on May 6, 2020
@Roasbeef (Member, Author) commented May 6, 2020

Gotcha, makes sense. For this, I think we just want to get the memory usage down to something that works for older legacy nodes with numerous payments, if possible. Otherwise we can consider that route, but then we create an entirely new project. It also sounds like this might be better implemented as a standalone tool? Possibly related to #4131 as well, since we might as well compact the database if we're going to copy it.

In this commit, we attempt to optimize `migration13` further by no
longer appending the zero bytes to the `attemptInfo` slice in-place.
Internally, the append will attempt to double the backing slice capacity
if it needs to re-allocate in order to fulfil the append, which is the
case in this instance.

Instead, we make a new slice, with a precise size, then copy over the
trimmed `attemptInfo`. The new slice will have 8 zero bytes at the end.
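
A toy illustration of the allocation behavior described above (standalone snippet, not the migration code itself; the exact capacity printed depends on the Go version):

package main

import "fmt"

func main() {
	attemptInfo := make([]byte, 100)
	var zero [8]byte

	// Appending past capacity makes the runtime grow the backing
	// array, roughly doubling it for small slices: headroom that a
	// one-shot migration will never use.
	grown := append(attemptInfo, zero[:]...)
	fmt.Println(len(grown), cap(grown)) // e.g. "108 208" on recent Go.

	// Exact-size make + copy: no over-allocation, and the trailing
	// 8 bytes are already zero-valued.
	exact := make([]byte, len(attemptInfo)+8)
	copy(exact, attemptInfo)
	fmt.Println(len(exact), cap(exact)) // "108 108"
}
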
@joostjager (Collaborator):

I wouldn't call it an entirely new project. The copy can be made before executing any of the migrations. That way the older migrations don't need any changes. Then we can create a new migration interface without a transaction and put that on migration 13. Finally copy over the database, indeed after possibly compacting. Or better, do some renames to make the replace operation atomic.
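
For illustration, a rough sketch of that copy-migrate-rename flow (function names are hypothetical, and error handling is abbreviated):

package main

import (
	"io"
	"os"

	bolt "go.etcd.io/bbolt"
)

// migrateByCopy sketches the proposed scheme: snapshot the DB file, run
// the transaction-less migrations against the copy, then atomically swap
// it in with a rename. Names are hypothetical, not lnd's actual API.
func migrateByCopy(dbPath string, migrate func(*bolt.DB) error) error {
	tmpPath := dbPath + ".migrating"

	// 1. Copy the database before touching anything; the original
	//    stays valid if anything below fails.
	if err := copyFile(dbPath, tmpPath); err != nil {
		return err
	}

	// 2. Apply the migrations to the copy; on failure, delete the
	//    copy and retry later with the original untouched.
	db, err := bolt.Open(tmpPath, 0600, nil)
	if err != nil {
		return err
	}
	if err := migrate(db); err != nil {
		db.Close()
		os.Remove(tmpPath)
		return err
	}
	if err := db.Close(); err != nil {
		return err
	}

	// 3. Rename over the original; on POSIX filesystems the replace
	//    is atomic, so a crash leaves either the old or the new DB.
	return os.Rename(tmpPath, dbPath)
}

func copyFile(src, dst string) error {
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()

	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()

	if _, err := io.Copy(out, in); err != nil {
		return err
	}
	// Flush to disk before the rename makes the copy authoritative.
	return out.Sync()
}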

@Roasbeef (Member, Author) commented May 7, 2020

By new project I just mean that we can possibly look into doing it for new migrations going forward. All we're after here is memory optimizations for the existing migrations. Also as it would be an entirely new way of doing migrations, it would need to have an adequate testing/vetting phase.

@lightninglabs-deploy:

@Crypt-iQ: review reminder
@Roasbeef, remember to re-request review from reviewers when ready

@lightninglabs-deploy:

Closing due to inactivity

@ellemouton closed this on Jul 28, 2023