
Exploration: TrackedArray #storages Map → Array #28

Draft

johanrd wants to merge 2 commits into main from perf/tracked-array-storages-array

Conversation


johanrd (Owner) commented Apr 20, 2026

Exploration PR — change from NVP's emberjs/ember.js#21221.

Replace TrackedArray's #storages: Map<number, Tag> with #storages: Array<Tag | undefined>. Deliberately without NVP's INDEX_STORAGE_THRESHOLD = 10000 cap — that would silently drop granular invalidation past index 10k.
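A minimal sketch of what the swap looks like (hypothetical names and a stand-in tag object; the real TrackedArray wires these into @glimmer/validator tags):

```javascript
// Map-backed per-index storage, as on main.
class MapStorages {
  #storages = new Map();
  read(i) {
    let tag = this.#storages.get(i);
    if (tag === undefined) {
      tag = { revision: 0 }; // stand-in for a real validator tag
      this.#storages.set(i, tag);
    }
    return tag;
  }
}

// Array-backed per-index storage, as in this PR. Holes are `undefined`,
// so lazy creation works the same way without a threshold cap.
class ArrayStorages {
  #storages = [];
  read(i) {
    let tag = this.#storages[i];
    if (tag === undefined) {
      tag = { revision: 0 }; // stand-in for a real validator tag
      this.#storages[i] = tag;
    }
    return tag;
  }
}
```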

Isolated microbench (bin/tracked-array-storages.bench.mjs)

| pattern | n | Map | Array | delta |
| --- | --- | --- | --- | --- |
| sequential read | 1000 | 25.06 µs | 4.71 µs | −81 % |
| sequential read | 5000 | 232.98 µs | 24.34 µs | −90 % |
| write-read cycle | 1000 | 41.33 µs | 7.90 µs | −81 % |
| sparse hole-then-fill | 1000 | 24.61 µs | 3.48 µs | −86 % |
| sparse random access | 1000 | 27.33 µs | 3.92 µs | −86 % |
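For reference, a benchmark of this shape can be sketched as below. This is an assumption about the general structure, not the contents of bin/tracked-array-storages.bench.mjs, which isn't shown here:

```javascript
// Time a function in µs/iteration after one warm-up pass.
function bench(fn, iters = 200) {
  fn(); // warm-up so first-run compilation cost isn't measured
  const start = process.hrtime.bigint();
  for (let i = 0; i < iters; i++) fn();
  return Number(process.hrtime.bigint() - start) / iters / 1000;
}

const n = 1000;
const mapStorages = new Map();
const arrayStorages = [];
for (let i = 0; i < n; i++) {
  mapStorages.set(i, { revision: 0 });
  arrayStorages[i] = { revision: 0 };
}

let sink = 0; // accumulate reads so the engine can't dead-code the loops
const mapUs = bench(() => {
  for (let i = 0; i < n; i++) sink += mapStorages.get(i).revision;
});
const arrayUs = bench(() => {
  for (let i = 0; i < n; i++) sink += arrayStorages[i].revision;
});
console.log(`map ${mapUs.toFixed(2)} µs/iter, array ${arrayUs.toFixed(2)} µs/iter`);
```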

Krausest tracerbench (80-fid)

The microbench win does NOT translate to Krausest.

| phase | Δms | Δ% |
| --- | --- | --- |
| duration | +25 | +1.20 % (regression) |
| render1000Items1End | +1 | +1.47 % |
| render5000Items2End | +10 | +1.54 % |
| clearManyItems2End | +1 | +1.92 % |
| render1000Items3End | +3 | +6.91 % |

No phases improved.

Saved to .bench/c-storages-array-80fid.json.

Why the divergence

The microbench simulates the data-structure swap in isolation using simple objects. In the real TrackedArray:

  • #storages is accessed through a private class field, which adds a slot lookup V8 handles uniformly for both Map and Array values
  • #dirtyCollection() sets this.#storages.length = 0 on every array mutation — this empties the array but keeps it allocated. Subsequent sequential re-fills trigger ElementsKind transitions that Map doesn't have to pay
  • The TrackedArray Proxy's get trap calls convertToInt → #readStorageFor → consumeTag per numeric access — the inner storage lookup is a smaller fraction of the trap's total cost than the microbench measured in isolation
  • V8 may already have tuned Map for small-integer-keyed monomorphic access in the glimmer runtime's specific call patterns
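The clearing pattern in the second bullet can be sketched as follows (a simplified stand-in for #dirtyCollection(), not the actual implementation):

```javascript
// Assigning length = 0 empties the array in place rather than allocating
// a new one, so the same backing store is refilled on the next pass.
const storages = [];
for (let i = 0; i < 5; i++) storages[i] = { revision: i };

const sameArray = storages;
storages.length = 0; // truncates in place; array identity is preserved
console.assert(storages === sameArray && storages.length === 0);

// A sequential refill starts again from index 0. In V8, repeated
// empty-then-refill cycles can re-run internal elements-representation
// work that a Map, which only ever sees set/get/clear, does not pay.
for (let i = 0; i < 5; i++) storages[i] = { revision: 0 };
```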

Verdict

Closing. Microbench says Array should be faster; real-world Krausest says it's a regression. The isolated mechanism doesn't survive contact with the surrounding code's access pattern and V8's actual optimizations.

Good reminder that microbench speedups don't guarantee production speedups — the same lesson as #6/#7, where microbench showed huge wins that didn't register on Krausest either.

Per-index storage tags in TrackedArray currently use a Map<number, Tag>.
Map.get/set has hash + collision overhead. Array[i] is a direct slot
read, several times faster for numeric indices.

Isolated microbench (n=1000 sequential read):
  Map   25.06 µs/iter
  Array  4.71 µs/iter   (−81%)

At n=5000:
  Map   232.98 µs/iter
  Array  24.34 µs/iter  (−90%)

Sparse patterns (hole-then-fill, random access) also stayed 5–10×
faster than the Map version up to n=1000 — V8's dictionary-mode
de-opt concern didn't materialize at realistic sizes.

Unlike NVP's version, this PR doesn't add the INDEX_STORAGE_THRESHOLD
cap — that would silently lose granular invalidation past index 10000.
Array all the way down preserves semantics.
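To illustrate the semantic difference, a hypothetical sketch of what the omitted cap would imply (the name follows the description of NVP's emberjs/ember.js#21221; the actual implementation there may differ):

```javascript
const INDEX_STORAGE_THRESHOLD = 10000;

// Below the cap, each index gets its own storage key. Past the cap, every
// index collapses onto one collection-wide key, so mutating index 10001
// would invalidate consumers of every other high index too.
function storageKeyFor(index) {
  return index < INDEX_STORAGE_THRESHOLD ? index : 'collection';
}

console.log(storageKeyFor(42), storageKeyFor(10001));
```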

github-actions bot commented Apr 20, 2026

📊 Package size report   +0.01%

| File | Before (Size / Brotli) | After (Size / Brotli) |
| --- | --- | --- |
| dist/dev/packages/@glimmer/validator/index.js | 40 kB / 8.4 kB | 40.3 kB (+0.9%) / 8.6 kB (+2%) |
| dist/prod/packages/@glimmer/validator/index.js | 31.8 kB / 6.4 kB | 32.1 kB (+1%) / 6.6 kB (+3%) |
| Total (includes all files) | 5.4 MB / 1.3 MB | 5.4 MB (+0.01%) / 1.3 MB (+0.03%) |
| Tarball size | 1.2 MB | 1.2 MB (+0.03%) |

🤖 This report was automatically generated by pkg-size-action
