
channeldb/kvdb/bolt: automated database compaction #4131

Closed
Roasbeef opened this issue Mar 31, 2020 · 0 comments · Fixed by #4667
Labels
advanced Issues suitable for very experienced developers database Related to the database/storage of LND enhancement Improvements to existing features / behaviour v0.12
Comments

@Roasbeef
Member

With the way our primary database bbolt works, disk space is never actually reclaimed when something is deleted. Instead, the pages where the data was stored are eventually added to a free list, and those pages are re-used when we need to update or insert new data. As a result, the database file and the free list will continue to grow until a compaction is done. For those users that sync the free list (--sync-freelist), a larger free list due to infrequent compaction will also result in longer start-up times.

Ideally we modify things so that we do a compaction either:

  • Each time we restart
  • With each new major migration
  • Continually in the background periodically

The first two options seem the easiest, as we don't need to try to compact a live database. Users can already do this manually, but we should wrap it up in an easy-to-use command so it's less error prone.

As for the compaction logic, we can borrow the logic from bbolt's compact command as is. At a high level, compaction is just copying all the keys, buckets, and sequence numbers into a new database file. Typically the user then needs to manually rename files; our automated version should handle that step as well.

The one thing we'll need to be mindful of is that our automated compaction survives restarts properly, and that it doesn't run if there isn't enough disk space for the compacted copy.

@Roasbeef Roasbeef added enhancement Improvements to existing features / behaviour database Related to the database/storage of LND labels Mar 31, 2020
@Roasbeef Roasbeef added this to the 0.11.0 milestone Mar 31, 2020
@Roasbeef Roasbeef added the advanced Issues suitable for very experienced developers label Mar 31, 2020
@Roasbeef Roasbeef changed the title channeldb/kvdb/bolt: autoamted database compaction channeldb/kvdb/bolt: automated database compaction Mar 31, 2020
@cfromknecht cfromknecht added this to To do in v0.11.0-beta via automation Apr 21, 2020
@cfromknecht cfromknecht removed this from the 0.11.0 milestone Jun 17, 2020
@cfromknecht cfromknecht removed the v0.11 label Jun 17, 2020
@cfromknecht cfromknecht removed this from To do in v0.11.0-beta Jun 17, 2020
@Roasbeef Roasbeef added this to the 0.12.0 milestone Jul 30, 2020
@Roasbeef Roasbeef added the v0.12 label Jul 30, 2020
@Roasbeef Roasbeef added this to To do in v0.12.0-beta via automation Jul 30, 2020
@Roasbeef Roasbeef assigned guggero and unassigned bhandras and cfromknecht Oct 1, 2020
@Roasbeef Roasbeef moved this from To do to In progress in v0.12.0-beta Oct 8, 2020
@Roasbeef Roasbeef moved this from In progress to Review in progress in v0.12.0-beta Nov 5, 2020
@Roasbeef Roasbeef moved this from Review in progress to Reviewer approved in v0.12.0-beta Nov 12, 2020
v0.12.0-beta automation moved this from Reviewer approved to Done Nov 17, 2020