[RFC] Split reflists to share their contents across snapshots #1282

Open
wants to merge 7 commits into master from feature/split-reflists
Conversation

neolynx
Member

@neolynx neolynx commented Apr 21, 2024

Replaces #1235

This builds on top of #1222, #1227, and #1233, and is thus in draft state until those are merged in. The only commit that's actually new is the very last one, whose commit message I copied below. (Even that single commit is admittedly quite big, but a sizable chunk of the changes are just plumbing a new RefListCollection through the code.)

Description of the Change

In current aptly, each repository and snapshot has its own reflist in
the database. This brings a few problems with it:

  • Given sufficiently large repositories and snapshots, these lists can
    get enormous, reaching >1MB. This is a problem for LevelDB's overall
    performance, as it tends to prefer values around the configured block
    size (which defaults to just 4KiB).
  • When you snapshot these large repositories, you get a full, new copy
    of the reflist, even if only a few packages changed. This means that
    having many snapshots with only small changes between them fills the
    database with largely duplicate reflists.
  • All the duplication also means that many of the same refs are being
    loaded repeatedly, which can cause some slowdown but, more notably,
    eats up huge amounts of memory.
  • Adding on more and more new repositories and snapshots will cause the
    time and memory spent on things like cleanup and publishing to grow
    roughly linearly.

At the core, there are two problems here:

  • Reflists get very big because there are just a lot of packages.
  • Different reflists tend to duplicate much of the same content.

Split reflists aim to solve this by separating reflists into 64
buckets. Package refs are sorted into individual buckets according to
the following system (a short sketch follows the list):

  • Take the first 3 letters of the package name, after dropping a lib
    prefix. (Using only the first 3 letters will cause packages with
    similar prefixes to end up in the same bucket, under the assumption
    that packages with similar names tend to be updated together.)
  • Take the 64-bit xxhash of these letters. (xxhash was chosen because it
    has relatively good distribution across the individual bits, which is
    important for the next step.)
  • Use the first 6 bits of the hash (range [0:63]) as an index into the
    buckets.
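
For illustration, here is a minimal Go sketch of that bucketing scheme. This is not the code from deb/reflist.go; in particular, the hashing package used and the choice of keeping the top 6 bits are assumptions made for the example:

```go
package main

import (
	"fmt"
	"strings"

	"github.com/cespare/xxhash/v2"
)

// bucketIndex sketches how a package name could be mapped to one of the 64
// buckets: strip a leading "lib", keep at most the first 3 letters, hash them
// with xxhash, and use 6 bits of the hash as the bucket index.
func bucketIndex(packageName string) int {
	prefix := strings.TrimPrefix(packageName, "lib")
	if len(prefix) > 3 {
		prefix = prefix[:3]
	}
	// Keep the top 6 bits of the 64-bit hash, giving a value in [0, 63].
	return int(xxhash.Sum64String(prefix) >> 58)
}

func main() {
	// Packages sharing a name prefix land in the same bucket.
	fmt.Println(bucketIndex("libssl3"), bucketIndex("openssl"), bucketIndex("openssh-server"))
}
```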

Once refs are placed in buckets, a sha256 digest of all the refs in the
bucket is taken. These buckets are then stored in the database, split
into roughly block-sized segments, and all the repositories and
snapshots simply store an array of bucket digests.
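
Roughly, the stored layout can be pictured as follows (the type and helper below are hypothetical stand-ins, not aptly's actual API):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// splitRefListSketch is a stand-in for a split reflist: the repository or
// snapshot record holds only the 64 bucket digests, while the bucket
// contents are stored in the database under those digests and can be shared.
type splitRefListSketch struct {
	BucketDigests [64][]byte
}

// bucketDigest hashes all refs in one bucket; the digest acts as the key
// under which the bucket's contents are stored.
func bucketDigest(refs [][]byte) []byte {
	h := sha256.New()
	for _, ref := range refs {
		h.Write(ref)
		h.Write([]byte{0}) // separator so ref boundaries stay unambiguous
	}
	return h.Sum(nil)
}

func main() {
	var sl splitRefListSketch
	// An illustrative ref; real refs are aptly package keys.
	sl.BucketDigests[0] = bucketDigest([][]byte{[]byte("Pamd64 aardvark 1.0-1 deadbeef")})
	fmt.Printf("bucket 0 digest: %x\n", sl.BucketDigests[0][:8])
}
```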

This approach means that repositories and snapshots can share their
reflist buckets. If a snapshot is taken of a repository, it will have
the same contents, so its split reflist will point to the same buckets
as the base repository, and only one copy of each bucket is stored in
the database. When some packages in the repository change, only the
buckets containing those packages will be modified; all the other
buckets will remain unchanged, and thus their contents will still be
shared. Later on, when these reflists are loaded, each bucket is only
loaded once, short-cutting loading many megabytes of data. In effect,
split reflists are essentially copy-on-write, with only the changed
buckets stored individually.
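
The copy-on-write behaviour can be shown with a deliberately simplified sketch (again hypothetical, using placeholder string digests rather than aptly's real types):

```go
package main

import "fmt"

// splitRefList holds only bucket digests; the bucket contents live once in
// the database, keyed by digest.
type splitRefList struct {
	bucketDigests [64]string
}

// snapshot copies just the digest array, so the new snapshot shares every
// bucket with its source repository.
func snapshot(repo splitRefList) splitRefList {
	return splitRefList{bucketDigests: repo.bucketDigests}
}

func main() {
	var repo splitRefList
	for i := range repo.bucketDigests {
		repo.bucketDigests[i] = fmt.Sprintf("digest-%02d", i)
	}

	snap := snapshot(repo)

	// A package in bucket 17 changes: only that digest is replaced; the
	// other 63 buckets are still shared with the snapshot.
	repo.bucketDigests[17] = "digest-17-updated"

	shared := 0
	for i := range repo.bucketDigests {
		if repo.bucketDigests[i] == snap.bucketDigests[i] {
			shared++
		}
	}
	fmt.Println("buckets still shared:", shared) // prints 63
}
```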

Changing the disk format means that a migration needs to take place, so
that task is moved into the database cleanup step, which will migrate
reflists over to split reflists, as well as delete any unused reflist
buckets.

All the reflist tests are also changed to additionally test out split
reflists; although the internal logic is all shared (since buckets are,
themselves, just normal reflists), some special additions are needed to
have native versions of the various reflist helper methods.

In our tests, we've observed the following improvements:

  • Memory usage during publish and database cleanup, with
    GOMEMLIMIT=2GiB, goes down from ~3.2GiB (larger than the memory
    limit!) to ~0.7GiB, a decrease of ~4.5x.
  • Database size decreases from 1.3GB to 367MB.

In my local tests, publish times also decreased to mere seconds, but
the same effect wasn't observed on the server, where the times stayed
roughly the same. My suspicion is that this is due to I/O performance:
my local system is an M1 MBP, which almost certainly has much faster
disk speeds than our DigitalOcean block volumes. Split reflists have
the side effect of requiring more random accesses, since all the
buckets are read by their keys, so if your random I/O performance is
slower, it might cancel out the benefits. That said, even in that
case, the memory usage and database size advantages still persist.

Checklist

  • unit-test added (if change is algorithm)
  • functional test added/updated (if change is functional)
  • man page updated (if applicable)
  • bash completion updated (if applicable)
  • documentation updated
  • author name in AUTHORS

It would be awesome if anyone could also test this out and report how it affects their performance & memory usage.

@neolynx neolynx added the fix lint and needs review labels and removed the fix lint label Apr 21, 2024
@neolynx neolynx force-pushed the feature/split-reflists branch 2 times, most recently from 70035b8 to 7abea61 on April 24, 2024 15:37
@neolynx neolynx added the fix tests label May 10, 2024
@neolynx
Member Author

neolynx commented Jun 8, 2024

@refi64 could you have a look at https://github.com/aptly-dev/aptly/actions/runs/9429650201/job/25976276670?pr=1282#step:9:438 ?

PANIC: snapshot_test.go:26: SnapshotSuite.TestNewSnapshotFromRepository

... Panic: runtime error: invalid memory address or nil pointer dereference (PC=0x43D1FE)

/opt/hostedtoolcache/go/1.21.10/x64/src/runtime/panic.go:914
  in gopanic
/opt/hostedtoolcache/go/1.21.10/x64/src/runtime/panic.go:261
  in panicmem
/opt/hostedtoolcache/go/1.21.10/x64/src/runtime/signal_unix.go:861
  in sigpanic
reflist.go:495
  in SplitRefList.Len
snapshot.go:48
  in NewSnapshotFromRepository
snapshot_test.go:35
  in SnapshotSuite.TestNewSnapshotFromRepository
/opt/hostedtoolcache/go/1.21.10/x64/src/reflect/value.go:380
  in Value.Call
/opt/hostedtoolcache/go/1.21.10/x64/src/runtime/asm_amd64.s:1650
  in goexit

@neolynx neolynx self-assigned this Jun 8, 2024
refi64 and others added 5 commits June 17, 2024 12:02
In some local tests w/ a slowed down filesystem, this massively cut down
on the time to clean up a repository by ~3x, bringing a total 'publish
update' time from ~16s to ~13s.

Signed-off-by: Ryan Gonzalez <ryan.gonzalez@collabora.com>
needed by:
- deb/reflist.go:431:15: min requires go1.21 or later
- deb/reflist.go:720:31: cannot convert digest (variable of type []byte) to type reflistDigestArray: conversion of slices to arrays requires go1.20 or later

codecov bot commented Jun 17, 2024

Codecov Report

Attention: Patch coverage is 83.19018% with 137 lines in your changes missing coverage. Please review.

Project coverage is 74.77%. Comparing base (b5bf2cb) to head (6e5b3fd).

Files Patch % Lines
deb/reflist.go 87.75% 39 Missing and 16 partials ⚠️
api/db.go 20.75% 38 Missing and 4 partials ⚠️
cmd/db_cleanup.go 67.05% 20 Missing and 8 partials ⚠️
deb/publish.go 80.43% 6 Missing and 3 partials ⚠️
deb/graph.go 33.33% 2 Missing ⚠️
api/repos.go 90.00% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #1282      +/-   ##
==========================================
+ Coverage   74.49%   74.77%   +0.28%     
==========================================
  Files         146      146              
  Lines       16540    17096     +556     
==========================================
+ Hits        12321    12784     +463     
- Misses       3241     3311      +70     
- Partials      978     1001      +23     


@neolynx neolynx removed the fix tests label Jun 17, 2024
@neolynx neolynx requested a review from a team June 17, 2024 14:29
@neolynx
Member Author

neolynx commented Jun 17, 2024

tests fixed, made compatible with go 1.19 (and current debian/bookworm)

@refi64
Contributor

refi64 commented Jun 18, 2024

Sorry for the delays, I've been a bit occupied elsewhere lately 😅 I had looked into the nil errors in the tests; they seemed to specifically come from the way the tests were initializing the repos: normally they're created and then have something loaded into them, but the tests create them and immediately start calling methods w/o any load. Changing that around also makes the tests pass and looks like this:

diff --git a/deb/local_test.go b/deb/local_test.go
index b87b1b62..6d753d83 100644
--- a/deb/local_test.go
+++ b/deb/local_test.go
@@ -40,12 +40,16 @@ func (s *LocalRepoSuite) TestString(c *C) {
 }
 
 func (s *LocalRepoSuite) TestNumPackages(c *C) {
-	c.Check(NewLocalRepo("lrepo", "My first repo").NumPackages(), Equals, 0)
+	r := NewLocalRepo("lrepo", "My first repo")
+	r.packageRefs = NewSplitRefList()
+	c.Check(r.NumPackages(), Equals, 0)
 	c.Check(s.repo.NumPackages(), Equals, 2)
 }
 
 func (s *LocalRepoSuite) TestRefList(c *C) {
-	c.Check(NewLocalRepo("lrepo", "My first repo").RefList(), IsNil)
+	r := NewLocalRepo("lrepo", "My first repo")
+	r.packageRefs = NewSplitRefList()
+	c.Check(r.RefList().Len(), Equals, 0)
 	c.Check(s.repo.RefList(), Equals, s.reflist)
 }
 
@@ -151,7 +155,6 @@ func (s *LocalRepoCollectionSuite) TestUpdateLoadComplete(c *C) {
 	r, err = collection.ByName("local1")
 	c.Assert(err, IsNil)
 	c.Assert(r.packageRefs, IsNil)
-	c.Assert(r.NumPackages(), Equals, 0)
 	c.Assert(s.collection.LoadComplete(r, s.reflistCollection), IsNil)
 	c.Assert(r.NumPackages(), Equals, 2)
 }
diff --git a/deb/remote_test.go b/deb/remote_test.go
index 3e05ef7e..baa5f2c6 100644
--- a/deb/remote_test.go
+++ b/deb/remote_test.go
@@ -139,6 +139,7 @@ func (s *RemoteRepoSuite) TestString(c *C) {
 }
 
 func (s *RemoteRepoSuite) TestNumPackages(c *C) {
+	s.repo.packageRefs = NewSplitRefList()
 	c.Check(s.repo.NumPackages(), Equals, 0)
 	s.repo.packageRefs = s.reflist
 	c.Check(s.repo.NumPackages(), Equals, 3)
@@ -727,7 +728,6 @@ func (s *RemoteRepoCollectionSuite) TestUpdateLoadComplete(c *C) {
 	r, err = collection.ByName("yandex")
 	c.Assert(err, IsNil)
 	c.Assert(r.packageRefs, IsNil)
-	c.Assert(r.NumPackages(), Equals, 0)
 	c.Assert(s.collection.LoadComplete(r, s.refListCollection), IsNil)
 	c.Assert(r.NumPackages(), Equals, 3)
 }
diff --git a/deb/snapshot_test.go b/deb/snapshot_test.go
index 805ccc8e..4886ebfb 100644
--- a/deb/snapshot_test.go
+++ b/deb/snapshot_test.go
@@ -31,7 +31,7 @@ func (s *SnapshotSuite) TestNewSnapshotFromRepository(c *C) {
 	c.Check(snapshot.SourceKind, Equals, SourceRemoteRepo)
 	c.Check(snapshot.SourceIDs, DeepEquals, []string{s.repo.UUID})
 
-	s.repo.packageRefs = nil
+	s.repo.packageRefs = NewSplitRefList()
 	_, err := NewSnapshotFromRepository("snap2", s.repo)
 	c.Check(err, ErrorMatches, ".*not updated")
 }

I'm...not sure which approach is better, really? My concern with just making NumPackages() return 0 on nil was that it would mask actual bugs where aptly tries to use a repository / snapshot / etc that never had the Load methods called, but otoh "the new object you created isn't immediately functional" is probably a recipe for confusion.
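
For reference, the nil-guard alternative being weighed here would look roughly like the self-contained sketch below (the types are stand-ins, not aptly's actual LocalRepo/SplitRefList definitions):

```go
package main

import "fmt"

// refList stands in for aptly's (split) reflist type in this sketch.
type refList struct{ refs []string }

func (l *refList) Len() int { return len(l.refs) }

// localRepo stands in for a repo whose packageRefs stays nil until one of
// the Load methods has been called.
type localRepo struct {
	packageRefs *refList
}

// NumPackages with the nil guard: a freshly constructed repo reports zero
// packages instead of panicking, at the cost of hiding cases where loading
// was forgotten.
func (r *localRepo) NumPackages() int {
	if r.packageRefs == nil {
		return 0
	}
	return r.packageRefs.Len()
}

func main() {
	r := &localRepo{}
	fmt.Println(r.NumPackages()) // 0, no panic
}
```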
