perf(core/felt): reduce unneeded heap allocations (#3565)
Conversation
Codecov Report

✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

@@           Coverage Diff            @@
##            main    #3565     +/-  ##
==========================================
+ Coverage   75.77%   75.84%   +0.06%
==========================================
  Files         384      380       -4
  Lines       33750    33740      -10
==========================================
+ Hits        25575    25591      +16
+ Misses       6386     6361      -25
+ Partials     1789     1788       -1

☔ View full report in Codecov by Sentry.
Pull request overview
Refactors felt.Felt JSON (un)marshalling to reduce heap allocations and tighten validation so only quoted 0x-prefixed hex strings are accepted in JSON contexts.
Changes:
- Reimplements Felt.UnmarshalJSON with fixed-size padding + hex.Decode and stricter "must be a quoted string" validation.
- Reimplements Felt.MarshalJSON to emit a quoted 0x hex string without unnecessary leading zeros, with fewer allocations.
- Adds comprehensive JSON marshal/unmarshal and round-trip tests, including the previously-missed "no quotes" invalid case.
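The decode approach described above, left-padding into a fixed-size stack buffer and doing a single hex.Decode, can be sketched roughly as follows. Note that feltBytes and unmarshalFeltJSON are illustrative names, not the actual core/felt API, and the real Felt is a field-element type rather than a plain byte array:

```go
package main

import (
	"encoding/hex"
	"errors"
	"fmt"
)

// feltBytes is a hypothetical stand-in for a felt's 32-byte representation.
type feltBytes [32]byte

// unmarshalFeltJSON rejects anything that is not a quoted 0x-prefixed hex
// string, then left-pads the digits into a fixed 64-char buffer so one
// hex.Decode fills the byte array without intermediate heap allocations.
func unmarshalFeltJSON(data []byte, out *feltBytes) error {
	// Strict: the value must be a quoted string of the form "0x...".
	if len(data) < 5 || data[0] != '"' || data[len(data)-1] != '"' {
		return errors.New("felt: JSON value must be a quoted 0x hex string")
	}
	if data[1] != '0' || data[2] != 'x' {
		return errors.New("felt: missing 0x prefix")
	}
	hexDigits := data[3 : len(data)-1] // strip quotes and the 0x prefix
	if len(hexDigits) > 64 {
		return errors.New("felt: hex string too long")
	}
	// Fixed-size padding buffer lives on the stack: no heap allocation.
	var padded [64]byte
	for i := range padded {
		padded[i] = '0'
	}
	copy(padded[64-len(hexDigits):], hexDigits)
	_, err := hex.Decode(out[:], padded[:])
	return err
}

func main() {
	var f feltBytes
	fmt.Println(unmarshalFeltJSON([]byte(`"0xdeadbeef"`), &f) == nil) // true
	fmt.Println(unmarshalFeltJSON([]byte(`0xdeadbeef`), &f) != nil)   // true: unquoted input is rejected
}
```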
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| core/felt/felt.go | Updates JSON marshal/unmarshal implementations for lower allocation and stricter input rules. |
| core/felt/felt_test.go | Adds targeted test coverage for valid/invalid JSON inputs and round-trip behavior. |
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Signed-off-by: Yaroslav Kukharuk <i.kukharuk@gmail.com>
Time (ns/op)

| Benchmark | old | new | Δ |
|---|---|---|---|
| MarshalJSON/zero | 48.53 ns | 28.59 ns | −41.09% |
| MarshalJSON/small | 62.81 ns | 32.22 ns | −48.71% |
| MarshalJSON/address | 141.30 ns | 46.29 ns | −67.24% |
| MarshalJSON/max_felt | 141.80 ns | 46.89 ns | −66.94% |
| MarshalJSON/padded_address | 141.70 ns | 46.56 ns | −67.15% |
| MarshalJSON/random | 141.65 ns | 47.08 ns | −66.76% |
| UnmarshalJSON/zero | 85.66 ns | 63.04 ns | −26.41% |
| UnmarshalJSON/small | 131.90 ns | 62.21 ns | −52.84% |
| UnmarshalJSON/address | 438.65 ns | 62.65 ns | −85.72% |
| UnmarshalJSON/max_felt | 445.75 ns | 62.93 ns | −85.88% |
| UnmarshalJSON/padded_address | 438.10 ns | 62.86 ns | −85.65% |
| UnmarshalJSON/random | 442.90 ns | 62.95 ns | −85.79% |
| geomean | 168.5 ns | 50.39 ns | −70.10% |
All deltas are statistically significant (p = 0.000, n = 10).
Memory (B/op)
| Benchmark | old | new | Δ |
|---|---|---|---|
| MarshalJSON/zero | 8 B | 8 B | — |
| MarshalJSON/small | 24 B | 16 B | −33.33% |
| MarshalJSON/address | 288 B | 80 B | −72.22% |
| MarshalJSON/max_felt | 288 B | 80 B | −72.22% |
| MarshalJSON/padded_address | 288 B | 80 B | −72.22% |
| MarshalJSON/random | 288 B | 80 B | −72.22% |
| UnmarshalJSON/zero | 37 B | 0 B | −100.00% |
| UnmarshalJSON/small | 48 B | 0 B | −100.00% |
| UnmarshalJSON/address | 112 B | 0 B | −100.00% |
| UnmarshalJSON/max_felt | 112 B | 0 B | −100.00% |
| UnmarshalJSON/padded_address | 112 B | 0 B | −100.00% |
| UnmarshalJSON/random | 112 B | 0 B | −100.00% |
Allocations (allocs/op)
| Benchmark | old | new | Δ |
|---|---|---|---|
| MarshalJSON/zero | 1 | 1 | — |
| MarshalJSON/small | 2 | 1 | −50% |
| MarshalJSON/address | 4 | 1 | −75% |
| MarshalJSON/max_felt | 4 | 1 | −75% |
| MarshalJSON/padded_address | 4 | 1 | −75% |
| MarshalJSON/random | 4 | 1 | −75% |
| UnmarshalJSON/zero | 2 | 0 | −100% |
| UnmarshalJSON/small | 2 | 0 | −100% |
| UnmarshalJSON/address | 2 | 0 | −100% |
| UnmarshalJSON/max_felt | 2 | 0 | −100% |
| UnmarshalJSON/padded_address | 2 | 0 | −100% |
| UnmarshalJSON/random | 2 | 0 | −100% |
The remaining 1 alloc on MarshalJSON is the returned []byte — unavoidable since the method contract is to return a slice.
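A single-allocation encoder in this spirit might look like the sketch below: count the hex digits first, make exactly one slice of the final size, then append digits from a lookup table. This uses the same hypothetical feltBytes byte-array type as above, not the real Felt, which is a field element, so it is only an illustration of the sizing trick:

```go
package main

import "fmt"

const hexTable = "0123456789abcdef"

// feltBytes is a hypothetical 32-byte big-endian felt representation.
type feltBytes [32]byte

// marshalFeltJSON emits `"0x..."` without leading zeros, allocating exactly
// one []byte sized to the final output instead of building intermediates.
func marshalFeltJSON(f *feltBytes) []byte {
	// Skip leading zero bytes (keep at least one byte for the zero value).
	i := 0
	for i < 31 && f[i] == 0 {
		i++
	}
	// Count hex digits, dropping a leading zero nibble unless the value is 0.
	nibbles := (32 - i) * 2
	if f[i]>>4 == 0 && nibbles > 1 {
		nibbles--
	}
	// The single allocation: two quotes + "0x" + the digits.
	out := make([]byte, 0, nibbles+4)
	out = append(out, '"', '0', 'x')
	if f[i]>>4 != 0 {
		out = append(out, hexTable[f[i]>>4])
	}
	out = append(out, hexTable[f[i]&0x0f])
	for _, b := range f[i+1:] {
		out = append(out, hexTable[b>>4], hexTable[b&0x0f])
	}
	return append(out, '"')
}

func main() {
	var zero feltBytes
	fmt.Println(string(marshalFeltJSON(&zero))) // "0x0"
	var addr feltBytes
	addr[30], addr[31] = 0xde, 0xad
	fmt.Println(string(marshalFeltJSON(&addr))) // "0xdead"
}
```

Because the capacity is computed up front, none of the appends reallocate, which matches the "sizes it precisely" behavior described in the takeaways.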
Takeaways
- UnmarshalJSON is now fully zero-alloc on every input shape. The old implementation allocated twice per call: once for strings.ToLower/strings.Trim, once for the pooled big.Int used by SetString.
- MarshalJSON drops from 4 → 1 alloc and is ~66% faster on full-width inputs. The single remaining alloc is the return slice; it's also 3.6× smaller (80 B vs 288 B) because the new code sizes it precisely instead of going through a z.String() intermediate.
- Geomean latency across all 12 sub-benchmarks improved by 70.10%; the biggest wins are in the UnmarshalJSON group (−86% on any realistic address-sized input).
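Zero-alloc claims like these are easy to spot-check with testing.AllocsPerRun. A minimal sketch, using an illustrative fixed-buffer decode in the same spirit as the new path rather than the actual felt code:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"testing"
)

// decodeHexFixed is an illustrative stand-in for the new decode path: it
// left-pads into a stack-allocated buffer, so the hot path never touches
// the heap.
func decodeHexFixed(src []byte, dst *[32]byte) {
	var padded [64]byte // fixed-size stack buffer
	for i := range padded {
		padded[i] = '0'
	}
	copy(padded[64-len(src):], src)
	hex.Decode(dst[:], padded[:])
}

func main() {
	out := new([32]byte)
	src := []byte("deadbeef")
	// AllocsPerRun reports the average heap allocations per invocation.
	allocs := testing.AllocsPerRun(100, func() {
		decodeHexFixed(src, out)
	})
	fmt.Println(allocs)
}
```

This is the same measurement the B/op and allocs/op columns above come from (via `go test -bench . -benchmem` and benchstat), just in spot-check form.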
Note: the original implementation was failing the "no quotes" test - it successfully unmarshalled non-quoted hex strings such as 0xdeadbeef. I think it makes sense to reject them since, in a JSON context, all felt values should be quoted strings.