
runtime: Swiss Table maps can double size multiple times when deleting/adding elements #70886

@thepudds

Description

Go version

go version go1.24rc1

Output of go env in your module/workspace:

N/A

What did you do?

When repeatedly deleting elements from and adding elements to a Swiss Table map without ever increasing the element count, the map can grow multiple times (e.g., from 128 slots to 1024 slots in a ~30s test).

I think the current implementation includes a simplification (compared to the Abseil and CockroachDB implementations) such that some growth is expected in lieu of a same-sized grow or rehashing in place, but it seemed worth a tracking bug that tables can end up growing substantially larger.

Here's a sample test demonstrating this:
https://go.dev/play/p/RITVDebV5op?v=gotip (original)
https://go.dev/play/p/xiWudCQADt5?v=gotip (edit: more reliable test from below)

It's runnable on the playground, where it sometimes passes and sometimes fails, though the main intent is to run it locally.

Using that test, here's a sample run that starts at ~10% load (14 elements in a map with an underlying table size of 128), then loops 1M times, each time deleting one element and adding a different one (never going above 14 elements in the map). Over those 1M delete/add cycles, the map's underlying table grows from 128 slots to 512 slots:

$ go1.24rc1 test -count=3 -v -loop=1000000 -run=TestTombstoneGrow/tableSize=128/elems=14
=== RUN   TestTombstoneGrow
=== RUN   TestTombstoneGrow/tableSize=128/elems=14/load=0.109
    main_test.go:33: before delete/add loop: len(m)=14, underlying table size=128, map=0xc00002b140
    table: growing: old size=128, new size=256, map=0xc00002b140
    table: growing: old size=256, new size=512, map=0xc00002b140
    main_test.go:53: [after delete/add loop]  len(m)=14, underlying table size=512, map=0xc00002b140
    main_test.go:56: got 2 allocations per run
--- FAIL: TestTombstoneGrow (0.34s)
    --- FAIL: TestTombstoneGrow/tableSize=128/elems=14/load=0.109 (0.34s)

The results above rely on a minor hack in the runtime to report the underlying table size and print when tables grow.

If we instead loop 100M times on that same test, the map grows from 128 table slots to 1024 table slots:

$ go1.24rc1 test -count=3 -v -loop=100000000 -run=TestTombstoneGrow/tableSize=128/elems=14
=== RUN   TestTombstoneGrow
=== RUN   TestTombstoneGrow/tableSize=128/elems=14/load=0.109
    main_test.go:33: before delete/add loop: len(m)=14, underlying table size=128, map=0xc00002b140
    table: growing: old size=128, new size=256, map=0xc00002b140
    table: growing: old size=256, new size=512, map=0xc00002b140
    table: growing: old size=512, new size=1024, map=0xc00002b140
    main_test.go:53: [after delete/add loop]  len(m)=14, underlying table size=1024, map=0xc00002b140
    main_test.go:56: got 2 allocations per run
--- FAIL: TestTombstoneGrow (33.86s)
    --- FAIL: TestTombstoneGrow/tableSize=128/elems=14/load=0.109 (33.86s)

If we just loop, say, 100 times, the table does not grow, as expected:

$ go1.24rc1 test -count=3 -v -loop=100 -run=TestTombstoneGrow/tableSize=128/elems=14
=== RUN   TestTombstoneGrow
=== RUN   TestTombstoneGrow/tableSize=128/elems=14/load=0.109
    main_test.go:33: before delete/add loop: len(m)=14, underlying table size=128, map=0xc00002b140
    main_test.go:53: [after delete/add loop]  len(m)=14, underlying table size=128, map=0xc00002b140
--- PASS: TestTombstoneGrow (0.00s)
    --- PASS: TestTombstoneGrow/tableSize=128/elems=14/load=0.109 (0.00s)

One note of caution regarding the accuracy of this bug report: test pass/failure here is determined via testing.AllocsPerRun to see whether an allocation occurs, but either I'm holding it wrong, or it seems to be flaky, or both. (I purposely did not use a more conventional runs count like 100, but maybe that was a mistake.)

CC @prattmic

What did you see happen?

8x memory used.

What did you expect to see?

Less than 8x. Using an extra ~2x memory might be OK as a near-term simplification, but 8x seems high, and the memory can grow further.

Metadata

Labels: FixPending (issues that have a fix which has not yet been reviewed or submitted), compiler/runtime (issues related to the Go compiler and/or runtime)

Status: Done