Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings. Use the following commands to manage reviews:
📝 Walkthrough

This PR modifies DA (Data Availability) height initialization logic to support configuration-driven overrides, removes the generic type parameter from the cache layer, replaces LRU-backed storage with plain map-backed storage under unified locking, and updates dependent code to use the simplified cache API.

Changes
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches: 🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
The latest Buf updates on your PR. Results from workflow CI / buf-check (pull_request).
Claude finished @julienrbrt's task in 2m 19s

Code Review
Summary

This is a clean simplification — removing the LRU dependency and replacing bounded caches with plain Go maps backed by a single mutex. The behavior is equivalent for the production workloads this system sees, and the code is significantly easier to reason about. The DA height initialization refactor also cleanly addresses prior review feedback. A few things worth discussing below.

Issues
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@block/internal/syncing/syncer.go`:
- Around line 359-362: The daHeight computation uses min(state.DAHeight-1, 0)
which always yields 0; change it to compute a safe candidate from state.DAHeight
(use candidate = state.DAHeight-1 when state.DAHeight > 0, else candidate = 0)
and then set daHeight = max(s.genesis.DAStartHeight, candidate). Keep the
subsequent conditional that bumps daHeight to max(daHeight, s.cache.DaHeight())
when s.headerStore != nil && s.headerStore.Height() > state.LastBlockHeight;
update the code around s.genesis.DAStartHeight, state.DAHeight, s.headerStore,
s.cache.DaHeight(), and state.LastBlockHeight accordingly.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 08094f15-06b3-4f6a-bf87-9ba1939c5d75
📒 Files selected for processing (1)
block/internal/syncing/syncer.go
Codecov Report

❌ Patch coverage is …

Additional details and impacted files:

```
@@ Coverage Diff @@
##             main    #3204      +/-   ##
==========================================
+ Coverage   61.42%   61.45%   +0.02%
==========================================
  Files         120      120
  Lines       12449    12462      +13
==========================================
+ Hits         7647     7658      +11
- Misses       3942     3945       +3
+ Partials      860      859       -1
```
Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
Actionable comments posted: 1
♻️ Duplicate comments (1)
block/internal/syncing/syncer.go (1)
359-362: ⚠️ Potential issue | 🔴 Critical

Bug: `min(state.DAHeight-1, 0)` always evaluates to 0 for `uint64`.

This is a critical logic error that was flagged in a previous review but remains unfixed:

- When `state.DAHeight > 0`: e.g., if `state.DAHeight = 5`, then `min(4, 0) = 0`
- When `state.DAHeight = 0`: it underflows to `MaxUint64`, then `min(MaxUint64, 0) = 0`

The result is that `daHeight` always equals `s.genesis.DAStartHeight`, completely ignoring the persisted `state.DAHeight`. This contradicts the comment at lines 356-358, which states the intent is to use the maximum of genesis, state, and cache DA heights.

🐛 Proposed fix
```diff
 // Set DA height to the maximum of the genesis start height, the state's DA height, and the cached DA height.
 // The cache's DaHeight() is initialized from store metadata, so it's always correct even after cache clear.
 // Only use cache.DaHeight() when P2P is actively syncing (headerStore has higher height than current state).
-daHeight := max(s.genesis.DAStartHeight, min(state.DAHeight-1, 0))
+daHeight := s.genesis.DAStartHeight
+if state.DAHeight > 0 {
+	// Use state.DAHeight - 1 to re-fetch the last DA height in case it wasn't fully processed
+	daHeight = max(daHeight, state.DAHeight-1)
+}
 if s.headerStore != nil && s.headerStore.Height() > state.LastBlockHeight {
 	daHeight = max(daHeight, s.cache.DaHeight())
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@block/internal/syncing/syncer.go` around lines 359 - 362, The computation for daHeight incorrectly uses min(state.DAHeight-1, 0) on a uint64 (always 0); instead compute a safe state candidate by checking if state.DAHeight > 0 then use state.DAHeight-1 else use 0, then set daHeight = max(s.genesis.DAStartHeight, stateCandidate) and finally, if s.headerStore != nil && s.headerStore.Height() > state.LastBlockHeight keep the existing behavior of daHeight = max(daHeight, s.cache.DaHeight()); update the logic around the daHeight variable and reference s.genesis.DAStartHeight, state.DAHeight, s.cache.DaHeight(), s.headerStore.Height(), and state.LastBlockHeight when making the change.
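The underflow behavior behind this finding can be demonstrated in isolation. A minimal sketch of the resume logic (the `resumeDAHeight` helper is illustrative, not a function from the PR):

```go
package main

import "fmt"

// resumeDAHeight mirrors the proposed fix: start from the genesis DA height
// and only consider state.DAHeight-1 when the stored height is non-zero,
// avoiding the uint64 underflow that made min(state.DAHeight-1, 0) always 0.
func resumeDAHeight(genesisStart, stateDAHeight uint64) uint64 {
	daHeight := genesisStart
	if stateDAHeight > 0 {
		daHeight = max(daHeight, stateDAHeight-1)
	}
	return daHeight
}

func main() {
	// Buggy expression: for stateDAHeight == 0, stateDAHeight-1 wraps to
	// MaxUint64, and for any positive value min(x, 0) is still 0, so the
	// whole expression is always 0.
	var state uint64 = 5
	fmt.Println(min(state-1, 0)) // 0, regardless of state

	fmt.Println(resumeDAHeight(10, 0))  // 10: falls back to genesis start
	fmt.Println(resumeDAHeight(10, 5))  // 10: genesis start still wins
	fmt.Println(resumeDAHeight(10, 42)) // 41: resume one height before state
}
```

The built-in `min`/`max` functions require Go 1.21 or newer.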
🧹 Nitpick comments (1)
block/internal/syncing/syncer.go (1)
357-358: Comment may be misleading about cache behavior after clear.

The comment states "The cache's DaHeight() is initialized from store metadata, so it's always correct even after cache clear." However, based on the relevant code snippets:
- `RestoreFromStore()` (context snippet 1) only runs once at initialization
- `maxDAHeight` (context snippet 3) is updated via `setMaxDAHeight()` only when items are added
- After a cache clear during syncing, `maxDAHeight` can regress if no new items with higher DA heights are subsequently added

This means `s.cache.DaHeight()` may return a lower value than expected after a cache clear, which could cause re-fetching of already-processed DA heights. Consider clarifying the comment or verifying the intended behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@block/internal/syncing/syncer.go` around lines 357 - 358, The comment incorrectly guarantees cache.DaHeight() stays correct after a clear; because RestoreFromStore() runs only at init and maxDAHeight is updated only via setMaxDAHeight() when items are added, clearing the cache can leave maxDAHeight too low and cause re-fetches. Fix by either updating the comment to state that DaHeight may regress after a cache clear unless restored from persistent metadata, or change the logic: on cache.Clear()/during syncing ensure RestoreFromStore() or a persistent max DA readback is invoked (or retain/restore maxDAHeight from store metadata) so that s.cache.DaHeight() reflects the true maximum; reference RestoreFromStore, setMaxDAHeight, maxDAHeight and cache.DaHeight in your change.
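The regression the comment describes can be modeled with a toy cache. The names below mirror the review discussion, not the cache's actual implementation:

```go
package main

import "fmt"

// toyCache models a watermark that only advances when items are added:
// clearing the cache and re-adding a lower height leaves the reported
// DA height below the previously observed maximum.
type toyCache struct {
	items       map[string]uint64
	maxDAHeight uint64
}

func (c *toyCache) add(hash string, daHeight uint64) {
	c.items[hash] = daHeight
	if daHeight > c.maxDAHeight {
		c.maxDAHeight = daHeight
	}
}

func (c *toyCache) clear() {
	// Clearing resets everything, including the watermark; nothing
	// restores it from store metadata after initialization.
	c.items = make(map[string]uint64)
	c.maxDAHeight = 0
}

func main() {
	c := &toyCache{items: make(map[string]uint64)}
	c.add("a", 100)
	fmt.Println(c.maxDAHeight) // 100
	c.clear()
	c.add("b", 40)
	fmt.Println(c.maxDAHeight) // 40: regressed below the earlier 100
}
```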
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pkg/config/config.go`:
- Line 570: Fix the typo in the flag help string passed to cmd.Flags().String
for FlagDBPath in config.go: change the duplicated wording "path for for node
database" to a correct phrase such as "path for node database" (or "path to node
database"); ensure the call creating the flag remains
cmd.Flags().String(FlagDBPath, def.DBPath, "<corrected help text>").
---
Duplicate comments:
In `@block/internal/syncing/syncer.go`:
- Around line 359-362: The computation for daHeight incorrectly uses
min(state.DAHeight-1, 0) on a uint64 (always 0); instead compute a safe state
candidate by checking if state.DAHeight > 0 then use state.DAHeight-1 else use
0, then set daHeight = max(s.genesis.DAStartHeight, stateCandidate) and finally,
if s.headerStore != nil && s.headerStore.Height() > state.LastBlockHeight keep
the existing behavior of daHeight = max(daHeight, s.cache.DaHeight()); update
the logic around the daHeight variable and reference s.genesis.DAStartHeight,
state.DAHeight, s.cache.DaHeight(), s.headerStore.Height(), and
state.LastBlockHeight when making the change.
---
Nitpick comments:
In `@block/internal/syncing/syncer.go`:
- Around line 357-358: The comment incorrectly guarantees cache.DaHeight() stays
correct after a clear; because RestoreFromStore() runs only at init and
maxDAHeight is updated only via setMaxDAHeight() when items are added, clearing
the cache can leave maxDAHeight too low and cause re-fetches. Fix by either
updating the comment to state that DaHeight may regress after a cache clear
unless restored from persistent metadata, or change the logic: on
cache.Clear()/during syncing ensure RestoreFromStore() or a persistent max DA
readback is invoked (or retain/restore maxDAHeight from store metadata) so that
s.cache.DaHeight() reflects the true maximum; reference RestoreFromStore,
setMaxDAHeight, maxDAHeight and cache.DaHeight in your change.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 7b99f109-c982-4c0c-a05c-190556abf106
📒 Files selected for processing (3)
block/internal/syncing/syncer.go
pkg/config/config.go
pkg/config/config_test.go
✅ Files skipped from review due to trivial changes (1)
- pkg/config/config_test.go
Actionable comments posted: 1
♻️ Duplicate comments (1)
pkg/config/config.go (1)
570: ⚠️ Potential issue | 🟡 Minor

Fix the duplicated word in the DB path help text. `--help` currently prints `path for for node database`.

📝 Suggested fix
```diff
-cmd.Flags().String(FlagDBPath, def.DBPath, "path for for node database")
+cmd.Flags().String(FlagDBPath, def.DBPath, "path to node database")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/config/config.go` at line 570, The help text for the DB path flag contains a duplicated word; update the call to cmd.Flags().String(FlagDBPath, def.DBPath, ...) to remove the extra "for" so the description reads "path for node database" (or a clearer alternative like "path to node database"); edit the string literal passed to cmd.Flags().String where FlagDBPath and def.DBPath are used to fix the typo.
🧹 Nitpick comments (1)
block/internal/syncing/syncer.go (1)
367-374: Treat the manual DA start-height as an unsafe override and validate it.

A mistyped value here can move the follower before `s.genesis.DAStartHeight` or beyond the recovered watermark and silently skip historical DA heights. Please reject impossible values, and at least emit a warning when operators opt into an unsafe jump.

As per coding guidelines: "Validate all inputs from external sources in Go code".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@block/internal/syncing/syncer.go` around lines 367 - 374, The manual DA start-height override (s.config.DA.StartHeight) must be validated: ensure startHeight >= s.genesis.DAStartHeight and startHeight <= daHeight (the recovered watermark) and reject (return an error) if out of that range; if the value is in-range but not equal to the current daHeight emit a warning via s.logger.Warn() that this is an unsafe jump/override; keep the existing info log when accepted but do not silently accept impossible values—use s.logger to warn and return a clear error when validation fails.
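One possible shape for the validation the reviewer asks for is sketched below. The function and parameter names (`validateDAStartOverride`, `override`, `recovered`) are illustrative, not the PR's API; the real code would log via `s.logger.Warn()` instead of returning a flag:

```go
package main

import (
	"errors"
	"fmt"
)

// validateDAStartOverride rejects an override below the genesis DA start
// height or above the recovered watermark, and reports whether an in-range
// override still moves the watermark (so the caller can warn about the jump).
func validateDAStartOverride(override, genesisStart, recovered uint64) (uint64, bool, error) {
	if override < genesisStart {
		return 0, false, errors.New("da start override below genesis DA start height")
	}
	if override > recovered {
		return 0, false, errors.New("da start override beyond recovered DA watermark")
	}
	// In range but not equal to the recovered watermark: accept, but flag
	// it so the caller emits a warning about the unsafe jump.
	return override, override != recovered, nil
}

func main() {
	h, unsafe, err := validateDAStartOverride(15, 10, 20)
	fmt.Println(h, unsafe, err) // 15 true <nil>

	if _, _, err := validateDAStartOverride(5, 10, 20); err != nil {
		fmt.Println("rejected:", err)
	}
}
```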
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@block/internal/syncing/syncer.go`:
- Around line 356-365: The current logic uses s.cache.DaHeight() (a global max)
to bump daHeight when headerStore is ahead, which can skip DA ranges; instead,
derive the bump from the per-block included-by entry for state.LastBlockHeight
(the persisted per-block inclusion height written by the submitter) or leave
daHeight unchanged if that per-block entry is missing. Concretely: replace the
branch that uses s.cache.DaHeight() with a lookup of the included-height for
state.LastBlockHeight (or call the cache/manager API that returns the
included-by height for a specific block) and max that value into daHeight only
when that per-block value exists; otherwise do not use the cache-wide
s.cache.DaHeight(). Ensure you still respect s.genesis.DAStartHeight and the
state.DAHeight-1 logic.
---
Duplicate comments:
In `@pkg/config/config.go`:
- Line 570: The help text for the DB path flag contains a duplicated word;
update the call to cmd.Flags().String(FlagDBPath, def.DBPath, ...) to remove the
extra "for" so the description reads "path for node database" (or a clearer
alternative like "path to node database"); edit the string literal passed to
cmd.Flags().String where FlagDBPath and def.DBPath are used to fix the typo.
---
Nitpick comments:
In `@block/internal/syncing/syncer.go`:
- Around line 367-374: The manual DA start-height override
(s.config.DA.StartHeight) must be validated: ensure startHeight >=
s.genesis.DAStartHeight and startHeight <= daHeight (the recovered watermark)
and reject (return an error) if out of that range; if the value is in-range but
not equal to the current daHeight emit a warning via s.logger.Warn() that this
is an unsafe jump/override; keep the existing info log when accepted but do not
silently accept impossible values—use s.logger to warn and return a clear error
when validation fails.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 63b1d6c5-1133-4102-8ea6-6061279613ef
📒 Files selected for processing (2)
block/internal/syncing/syncer.go
pkg/config/config.go
```diff
 // Set DA height to the maximum of the genesis start height, the state's DA height, and the cached DA height.
-// Use the DA height from the last executed block instead of the maximum from all blocks,
-// because P2P-fetched heights may be lost on restart.
-daHeight := max(s.genesis.DAStartHeight, min(state.DAHeight-1, 0))
-if state.LastBlockHeight > 0 {
-	if lastHeaderDA, ok := s.cache.GetHeaderDAIncludedByHeight(state.LastBlockHeight); ok {
-		daHeight = max(daHeight, lastHeaderDA)
-	}
-	if lastDataDA, ok := s.cache.GetDataDAIncludedByHeight(state.LastBlockHeight); ok {
-		daHeight = max(daHeight, lastDataDA)
-	}
-}
+// The cache's DaHeight() is initialized from store metadata, so it's always correct even after cache clear.
+// Only use cache.DaHeight() when P2P is actively syncing (headerStore has higher height than current state).
+daHeight := s.genesis.DAStartHeight
+if state.DAHeight > s.genesis.DAStartHeight {
+	daHeight = max(daHeight, state.DAHeight-1)
+}
 if s.headerStore != nil && s.headerStore.Height() > state.LastBlockHeight {
 	daHeight = max(daHeight, s.cache.DaHeight())
 }
```
Resume from the last applied block's DA height, not the cache-wide max.
block/internal/cache/manager.go:174-176 defines DaHeight() as the highest DA height ever seen across the caches, while block/internal/submitting/submitter.go:440-458 still persists per-block inclusion heights. Using the global max here can jump the retriever past untouched DA ranges whenever later submissions are already in cache metadata, and those skipped heights are no longer covered by the sequential catchup path when P2P priority hints are dropped. Please derive this bump from the included-by-height entries for state.LastBlockHeight, or leave the recovered watermark unchanged when they are missing.
🛠️ Safer resume logic
```diff
-if s.headerStore != nil && s.headerStore.Height() > state.LastBlockHeight {
-	daHeight = max(daHeight, s.cache.DaHeight())
-}
+if s.headerStore != nil && s.headerStore.Height() > state.LastBlockHeight {
+	if headerDAHeight, ok := s.cache.GetHeaderDAIncludedByHeight(state.LastBlockHeight); ok {
+		daHeight = max(daHeight, headerDAHeight)
+	}
+	if dataDAHeight, ok := s.cache.GetDataDAIncludedByHeight(state.LastBlockHeight); ok {
+		daHeight = max(daHeight, dataDAHeight)
+	}
+}
```
+ }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@block/internal/syncing/syncer.go` around lines 356 - 365, The current logic
uses s.cache.DaHeight() (a global max) to bump daHeight when headerStore is
ahead, which can skip DA ranges; instead, derive the bump from the per-block
included-by entry for state.LastBlockHeight (the persisted per-block inclusion
height written by the submitter) or leave daHeight unchanged if that per-block
entry is missing. Concretely: replace the branch that uses s.cache.DaHeight()
with a lookup of the included-height for state.LastBlockHeight (or call the
cache/manager API that returns the included-by height for a specific block) and
max that value into daHeight only when that per-block value exists; otherwise do
not use the cache-wide s.cache.DaHeight(). Ensure you still respect
s.genesis.DAStartHeight and the state.DAHeight-1 logic.
Actionable comments posted: 1
🧹 Nitpick comments (3)
block/internal/cache/pending_events_map_test.go (1)
20-75: Consider table-driving these pending-event scenarios.

The three tests all exercise the same insert/update/delete surface; a single table would remove repeated setup and make it easier to add edge cases like missing heights or overwrite-then-delete.
As per coding guidelines, "Use table-driven tests in Go unit tests".
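As a rough illustration of the table-driven shape the guideline suggests (using a stand-in map type, not the actual `pendingEventsMap` API):

```go
package main

import "fmt"

// pendingMap is a stand-in for the pending-events map under test; the real
// type and its setItem/deleteAllForHeight methods live in the cache package.
type pendingMap map[uint64]string

func (m pendingMap) set(h uint64, v string)   { m[h] = v }
func (m pendingMap) deleteForHeight(h uint64) { delete(m, h) }

func main() {
	cases := []struct {
		name      string
		ops       func(pendingMap)
		wantCount int
	}{
		{"insert two", func(m pendingMap) { m.set(1, "a"); m.set(2, "b") }, 2},
		{"update existing", func(m pendingMap) { m.set(1, "a"); m.set(1, "b") }, 1},
		{"delete for height", func(m pendingMap) { m.set(1, "a"); m.deleteForHeight(1) }, 0},
	}
	for _, tc := range cases {
		m := pendingMap{}
		tc.ops(m)
		if len(m) != tc.wantCount {
			fmt.Printf("%s: got %d, want %d\n", tc.name, len(m), tc.wantCount)
		} else {
			fmt.Printf("%s: ok\n", tc.name)
		}
	}
}
```

In a real `_test.go` file the loop body would run under `t.Run(tc.name, ...)` with `testify` assertions, but the case-table structure is the same.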
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@block/internal/cache/pending_events_map_test.go` around lines 20 - 75, The three tests TestPendingEventsMap_BasicCRUD, TestPendingEventsMap_UpdateExisting and TestPendingEventsMap_DeleteAllForHeight duplicate setup and exercising of newPendingEventsMap, setItem, getNextItem, deleteAllForHeight and itemCount; convert them into a single table-driven test that enumerates cases (e.g., insert sequence + expects, update-then-expect, delete-for-height + expects) and iterates over each case asserting counts and returned events, collapsing duplicated setup into a loop over test cases to improve readability and make adding edge cases simpler.

block/internal/cache/generic_cache_test.go (1)
54-190: Consider table-driving the restore permutations.

The restore tests from Line 54 through Line 190 only vary by seeded snapshot and expected heights/counts. Collapsing them into a table will cut the copy/paste and make it easier to add the same-height replacement case this cache should cover.
As per coding guidelines, "Use table-driven tests in Go unit tests".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@block/internal/cache/generic_cache_test.go` around lines 54 - 190, Several nearly-identical tests (TestCache_RestoreFromStore_EmptyChain, _FullyFinalized, _InFlightWindow, _SingleEntry, _PlaceholderOverwrittenByRealHash, _RoundTrip) repeat snapshot seeding and assertions; convert them into a single table-driven test that iterates cases describing the snapshot entries, actions (e.g., writeSnapshot, setDAIncluded, removeDAIncluded, SaveToStore), and expected daIncludedLen/daHeight and presence checks. Implement a new TestCache_RestoreFromStore_TableDriven that constructs NewCache(st, prefix) per case, uses writeSnapshot/st ops and optional pre/post actions, calls RestoreFromStore(ctx), and runs the same assertions currently in each specialized test (refer to RestoreFromStore, NewCache, writeSnapshot, setDAIncluded, removeDAIncluded, SaveToStore, getDAIncluded/getDAIncludedByHeight). Keep each case small and name them to cover the existing scenarios including the same-height replacement case.

block/internal/cache/manager.go (1)
387-390: Clear the TX timestamp index alongside `txCache`.

`ClearFromStore` rebuilds `m.txCache` but leaves the old `m.txTimestamps` map alive. If this helper is reused after startup, `CleanupOldTxs` will still traverse stale hashes from before the clear. Reinitialize the timestamp map here so the in-memory TX state is fully reset.

Suggested change
```diff
 m.headerCache = NewCache(m.store, HeaderDAIncludedPrefix)
 m.dataCache = NewCache(m.store, DataDAIncludedPrefix)
 m.txCache = NewCache(nil, "")
+m.txTimestamps = new(sync.Map)
 m.pendingEvents = newPendingEventsMap[common.DAHeightEvent]()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@block/internal/cache/manager.go` around lines 387 - 390, The tx timestamp index (m.txTimestamps) isn't reinitialized when rebuilding m.txCache, causing CleanupOldTxs to iterate stale entries; in the same place where you set m.txCache = NewCache(nil, "") (inside ClearFromStore), reinitialize the timestamp map (m.txTimestamps) to an empty/new map/structure so the in-memory TX state is fully reset alongside m.txCache and prevent traversal of old hashes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@block/internal/cache/generic_cache.go`:
- Around line 29-31: hashByHeight is mixing two responsibilities (tracking every
seen hash per height for cleanup and storing the single DA-included hash),
causing overwrites and stale entries; split it into a DA-specific index (e.g.,
daHashByHeight map[uint64]string) and a seen-index (e.g., seenByHeight
map[uint64]map[string]bool) and update code paths that touch hashByHeight: when
adding a hash record, insert into seenByHeight[height][hash]=true; when marking
DA inclusion update daIncluded and daHashByHeight[height]=hash; change
getDAIncludedByHeight to consult daHashByHeight/daIncluded instead of
hashByHeight; change deleteAllForHeight to iterate seenByHeight[height] to
remove all hashes and their entries from hashes/daIncluded/daHashByHeight, and
remove empty maps to avoid leaks; update any uses of hashByHeight in functions
add/delete/get (including the code around getDAIncludedByHeight and
deleteAllForHeight) to use the new structures accordingly.
---
Nitpick comments:
In `@block/internal/cache/generic_cache_test.go`:
- Around line 54-190: Several nearly-identical tests
(TestCache_RestoreFromStore_EmptyChain, _FullyFinalized, _InFlightWindow,
_SingleEntry, _PlaceholderOverwrittenByRealHash, _RoundTrip) repeat snapshot
seeding and assertions; convert them into a single table-driven test that
iterates cases describing the snapshot entries, actions (e.g., writeSnapshot,
setDAIncluded, removeDAIncluded, SaveToStore), and expected
daIncludedLen/daHeight and presence checks. Implement a new
TestCache_RestoreFromStore_TableDriven that constructs NewCache(st, prefix) per
case, uses writeSnapshot/st ops and optional pre/post actions, calls
RestoreFromStore(ctx), and runs the same assertions currently in each
specialized test (refer to RestoreFromStore, NewCache, writeSnapshot,
setDAIncluded, removeDAIncluded, SaveToStore,
getDAIncluded/getDAIncludedByHeight). Keep each case small and name them to
cover the existing scenarios including the same-height replacement case.
In `@block/internal/cache/manager.go`:
- Around line 387-390: The tx timestamp index (m.txTimestamps) isn't
reinitialized when rebuilding m.txCache, causing CleanupOldTxs to iterate stale
entries; in the same place where you set m.txCache = NewCache(nil, "") (inside
ClearFromStore), reinitialize the timestamp map (m.txTimestamps) to an empty/new
map/structure so the in-memory TX state is fully reset alongside m.txCache and
prevent traversal of old hashes.
In `@block/internal/cache/pending_events_map_test.go`:
- Around line 20-75: The three tests TestPendingEventsMap_BasicCRUD,
TestPendingEventsMap_UpdateExisting and TestPendingEventsMap_DeleteAllForHeight
duplicate setup and exercising of newPendingEventsMap, setItem, getNextItem,
deleteAllForHeight and itemCount; convert them into a single table-driven test
that enumerates cases (e.g., insert sequence + expects, update-then-expect,
delete-for-height + expects) and iterates over each case asserting counts and
returned events, collapsing duplicated setup into a loop over test cases to
improve readability and make adding edge cases simpler.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 4c6f0031-ff90-4e05-aaf8-a7443e244ed5
📒 Files selected for processing (5)
block/internal/cache/generic_cache.go
block/internal/cache/generic_cache_test.go
block/internal/cache/manager.go
block/internal/cache/pending_events_map.go
block/internal/cache/pending_events_map_test.go
```go
hashes       map[string]bool
daIncluded   map[string]uint64
hashByHeight map[uint64]string
```
Split the height index by responsibility.
hashByHeight is doing two incompatible jobs: cleanup for every seen hash and lookup for the DA-included hash at a height. Once two hashes share a block height, later writes overwrite earlier ones, so getDAIncludedByHeight can follow the wrong hash and deleteAllForHeight can only purge one of them. With the LRU gone, those superseded hashes stop aging out. Keep a DA-specific height index, and track seen hashes per height separately.
Also applies to: 61-65, 81-89, 95-103, 128-137
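A sketch of the split this comment proposes: one index for the DA-included hash per height, and a separate per-height set of every hash ever seen, so cleanup and DA lookup no longer fight over a single map entry. Type and method names here are illustrative, not the cache's actual API:

```go
package main

import "fmt"

// heightIndex separates the two responsibilities hashByHeight was mixing.
type heightIndex struct {
	daHashByHeight map[uint64]string          // height -> DA-included hash
	seenByHeight   map[uint64]map[string]bool // height -> all seen hashes
}

func newHeightIndex() *heightIndex {
	return &heightIndex{
		daHashByHeight: make(map[uint64]string),
		seenByHeight:   make(map[uint64]map[string]bool),
	}
}

// addSeen records a hash observed at a height, for later cleanup.
func (ix *heightIndex) addSeen(height uint64, hash string) {
	if ix.seenByHeight[height] == nil {
		ix.seenByHeight[height] = make(map[string]bool)
	}
	ix.seenByHeight[height][hash] = true
}

// markDAIncluded records the single DA-included hash for a height.
func (ix *heightIndex) markDAIncluded(height uint64, hash string) {
	ix.addSeen(height, hash)
	ix.daHashByHeight[height] = hash
}

// deleteAllForHeight purges every hash seen at the height, not just the
// last writer, and returns the purged hashes.
func (ix *heightIndex) deleteAllForHeight(height uint64) []string {
	var removed []string
	for h := range ix.seenByHeight[height] {
		removed = append(removed, h)
	}
	delete(ix.seenByHeight, height)
	delete(ix.daHashByHeight, height)
	return removed
}

func main() {
	ix := newHeightIndex()
	ix.addSeen(7, "hashA") // superseded candidate at height 7
	ix.markDAIncluded(7, "hashB")
	fmt.Println(ix.daHashByHeight[7])          // hashB: lookup follows the DA-included hash
	fmt.Println(len(ix.deleteAllForHeight(7))) // 2: both hashes purged
}
```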
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@block/internal/cache/generic_cache.go` around lines 29 - 31, hashByHeight is
mixing two responsibilities (tracking every seen hash per height for cleanup and
storing the single DA-included hash), causing overwrites and stale entries;
split it into a DA-specific index (e.g., daHashByHeight map[uint64]string) and a
seen-index (e.g., seenByHeight map[uint64]map[string]bool) and update code paths
that touch hashByHeight: when adding a hash record, insert into
seenByHeight[height][hash]=true; when marking DA inclusion update daIncluded and
daHashByHeight[height]=hash; change getDAIncludedByHeight to consult
daHashByHeight/daIncluded instead of hashByHeight; change deleteAllForHeight to
iterate seenByHeight[height] to remove all hashes and their entries from
hashes/daIncluded/daHashByHeight, and remove empty maps to avoid leaks; update
any uses of hashByHeight in functions add/delete/get (including the code around
getDAIncludedByHeight and deleteAllForHeight) to use the new structures
accordingly.
🧹 Nitpick comments (1)
block/internal/cache/manager.go (1)
399-403: Consider acquiring `pendingMu` when reassigning `pendingEvents`.

The `pendingEvents` map is reassigned without holding `pendingMu`. Currently safe because `ClearFromStore` is only called during initialization in `NewManager` before the instance is exposed. However, since this is an exported method, adding the lock would provide defensive protection against future misuse.

🛡️ Suggested defensive fix
```diff
 m.headerCache = NewCache(m.store, HeaderDAIncludedPrefix)
 m.dataCache = NewCache(m.store, DataDAIncludedPrefix)
 m.txCache = NewCache(nil, "")
+m.pendingMu.Lock()
 m.pendingEvents = make(map[uint64]*common.DAHeightEvent)
+m.pendingMu.Unlock()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@block/internal/cache/manager.go` around lines 399 - 403, When reassigning the pendingEvents map in ClearFromStore, acquire the pendingMu lock to guard the mutation: wrap the reassignment (m.pendingEvents = make(map[uint64]*common.DAHeightEvent)) with m.pendingMu.Lock() and defer m.pendingMu.Unlock() so the map is protected; do the same defensive locking in any other method that mutates pendingEvents (e.g., ensure NewManager and ClearFromStore both respect pendingMu) to prevent races if ClearFromStore becomes callable after initialization.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@block/internal/cache/manager.go`:
- Around line 399-403: When reassigning the pendingEvents map in ClearFromStore,
acquire the pendingMu lock to guard the mutation: wrap the reassignment
(m.pendingEvents = make(map[uint64]*common.DAHeightEvent)) with
m.pendingMu.Lock() and defer m.pendingMu.Unlock() so the map is protected; do
the same defensive locking in any other method that mutates pendingEvents (e.g.,
ensure NewManager and ClearFromStore both respect pendingMu) to prevent races if
ClearFromStore becomes callable after initialization.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 1bba03bd-37a1-4654-8db8-5deb1f86a313
📒 Files selected for processing (1)
block/internal/cache/manager.go
Overview
My previous PR got merged with a test commit (longer version of this). This should have been merged instead.
Same behavior.
Summary by CodeRabbit

New Features
- `evnode.da.start_height` configuration flag to override the Data Availability start height.

Refactor