
Conversation

colinlyguo
Contributor

@colinlyguo colinlyguo commented Aug 11, 2025

Purpose or design rationale of this PR

Fetch L1 messages from the database, since they are not included in blobs.

PR title

Your PR title must follow conventional commits (as we are doing squash merge for each PR), so it must start with one of the following types:

  • fix: A bug fix

Deployment tag versioning

Has the tag in common/version.go been updated, or have you added the bump-version label to this PR?

  • Yes

Breaking change label

Does this PR have the breaking-change label?

  • No, this PR is not a breaking change

Summary by CodeRabbit

  • New Features

    • Assembled blocks now include L1 messages so batches reflect on-chain ordering.
  • Bug Fixes

    • Added database-consistency validation to reduce mismatches during batch verification.
    • Enforced L1-before-L2 ordering within blocks.
  • Refactor

    • Improved calldata decoding and validation flow using ABI-aware parsing for stronger sanity checks.
  • Tests

    • New end-to-end test validating DABatch assembly from commit-batch blobs.
  • Chores

    • Updated version to v4.5.40.

@colinlyguo colinlyguo added the bump-version Bump the version tag for deployment label Aug 11, 2025

coderabbitai bot commented Aug 11, 2025

Warning

Rate limit exceeded

@colinlyguo has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 0 minutes and 9 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between d6e03e1 and 7b20c8a.

📒 Files selected for processing (1)
  • rollup/internal/controller/relayer/l2_relayer_sanity.go (11 hunks)

Walkthrough

Refactors commitBatches calldata parsing to an ABI-driven top-level parser, introduces exported CalldataInfo, threads a map of L1 messages-by-block through validation and DABatch/block assembly, adds a database-consistency validation, adds an end-to-end DABatch assembly test and testdata, and bumps version v4.5.39 → v4.5.40.

Changes

Cohort / File(s) | Summary

  • Version bump (common/version/version.go)
    Update tag from v4.5.39 to v4.5.40; no logic changes.
  • Relayer sanity parsing & validation (rollup/internal/controller/relayer/l2_relayer_sanity.go)
    Move calldata parsing to top-level parseCommitBatchesCalldata(abi, calldata) (adds abi import). Add exported CalldataInfo type. getBatchesFromCalldata now returns map[uint64][]*types.TransactionData (L1 messages by block). Thread l1MessagesWithBlockNumbers through validateCalldataAndBlobsAgainstDatabase, validateSingleBlobAgainstBatch, assembleDABatchFromPayload, and assembleBlocksFromPayload. Add validateDatabaseConsistency call and enforce L1-before-L2 ordering when assembling blocks.
  • Tests & testdata (rollup/internal/controller/relayer/l2_relayer_sanity_test.go, rollup/internal/controller/relayer/testdata/.../commit_batches_blobs.json)
    Add TestAssembleDABatch and helper loadBlobsFromJSON to exercise end-to-end DABatch assembly from blobs using parsed calldata, codec selection, and L1 message mapping; include JSON of hex-encoded blobs.
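
To make the walkthrough concrete, here is a minimal sketch of two pieces it describes: the exported CalldataInfo carrier and the L1-before-L2 ordering check applied while collecting L1 messages per block. Only the CalldataInfo field names come from the summary above; the simplified txData type, the l1MessageTxType constant, the hash representation, and the helper name are illustrative assumptions, not the code in l2_relayer_sanity.go:

// Hedged sketch based on the walkthrough above; everything marked as an
// assumption below is illustrative and not taken from l2_relayer_sanity.go.
package main

import "fmt"

// CalldataInfo carries the values decoded from commitBatches calldata,
// matching the fields named in the walkthrough and sequence diagram.
// The real type likely stores hashes as common.Hash; [32]byte keeps the
// sketch dependency-free.
type CalldataInfo struct {
	Version         uint8
	ParentBatchHash [32]byte
	LastBatchHash   [32]byte
}

// txData is a simplified stand-in for the project's transaction data type.
type txData struct {
	Type uint8
}

// l1MessageTxType is an assumed stand-in for types.L1MessageTxType.
const l1MessageTxType uint8 = 0x7e

// collectL1MessagesByBlock groups L1 messages by block number and enforces
// the L1-before-L2 ordering invariant: within a block, no L1 message may
// appear after an L2 transaction.
func collectL1MessagesByBlock(blocks map[uint64][]txData) (map[uint64][]txData, error) {
	out := make(map[uint64][]txData)
	for bn, txs := range blocks {
		seenL2 := false
		for _, tx := range txs {
			if tx.Type == l1MessageTxType {
				if seenL2 {
					return nil, fmt.Errorf("L1 message after L2 tx in block %d", bn)
				}
				out[bn] = append(out[bn], tx)
			} else {
				seenL2 = true
			}
		}
	}
	return out, nil
}

func main() {
	blocks := map[uint64][]txData{
		100: {{Type: l1MessageTxType}, {Type: 0}}, // valid: L1 message precedes the L2 tx
		101: {{Type: 0}, {Type: l1MessageTxType}}, // invalid: L1 message after an L2 tx
	}
	if _, err := collectL1MessagesByBlock(blocks); err != nil {
		fmt.Println("invariant violated:", err)
	}
}

In the real file the map of L1 messages is threaded through validation and assembly rather than returned from a standalone helper, so treat this purely as a reading aid.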

Sequence Diagram(s)

sequenceDiagram
  participant Caller
  participant ABI
  participant Relayer
  participant DB
  participant Codec

  Caller->>ABI: parseCommitBatchesCalldata(abi, calldata)
  ABI-->>Relayer: CalldataInfo{Version, ParentBatchHash, LastBatchHash}

  Relayer->>DB: getBatchesFromCalldata(info) -> fetch L2 blocks
  DB-->>Relayer: blocks + L1 messages per block
  Relayer-->>Relayer: l1MessagesWithBlockNumbers map

  Relayer->>Relayer: validateDatabaseConsistency(batchesToValidate)
  loop per blob
    Relayer->>Codec: decode blob -> DABlobPayload
    Relayer->>Relayer: validateSingleBlobAgainstBatch(..., l1MessagesWithBlockNumbers)
    Relayer->>Relayer: assembleDABatchFromPayload(..., l1MessagesWithBlockNumbers)
    Relayer-->>Caller: DABatch (blocks include L1 messages)
  end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~35 minutes

Possibly related PRs

Suggested reviewers

  • jonastheis
  • georgehao
  • 0xmountaintop

Poem

A rabbit parses calldata bright,
Hops L1 messages into sight.
Blobs click-clack and blocks align,
Version bumps — the hashes shine.
Thump-thump, the relayer's delight. 🐇✨


@colinlyguo colinlyguo force-pushed the fix-rollup-relayer-sanity-checks branch from 76cd3b3 to 819f4ce on August 11, 2025 at 19:51

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
rollup/internal/controller/relayer/l2_relayer_sanity_test.go (2)

102-102: Use idiomatic Go increment operator.

-		index += 1
+		index++

46-82: Consider extracting hardcoded test data to fixtures.

The test contains significant hardcoded data (addresses, transaction data). Consider moving this to a test fixture file for better maintainability and reusability.

Would you like me to help refactor this test to use external test fixtures?
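
If the fixture route is taken, a loader along these lines could keep the test body small. This is a sketch under assumptions: the helper name, the flat array of 0x-prefixed hex strings, and the package name are guesses; the repository's loadBlobsFromJSON and commit_batches_blobs.json may be shaped differently:

// Hypothetical fixture loader; the real loadBlobsFromJSON in
// l2_relayer_sanity_test.go and the layout of commit_batches_blobs.json
// may differ from what is assumed here.
package relayer

import (
	"encoding/hex"
	"encoding/json"
	"os"
	"strings"
	"testing"

	"github.com/stretchr/testify/require"
)

// loadHexBlobsFromJSON reads a JSON array of hex-encoded blobs from testdata
// and returns the decoded byte slices.
func loadHexBlobsFromJSON(t *testing.T, path string) [][]byte {
	t.Helper()

	raw, err := os.ReadFile(path)
	require.NoError(t, err)

	var hexBlobs []string
	require.NoError(t, json.Unmarshal(raw, &hexBlobs))

	blobs := make([][]byte, 0, len(hexBlobs))
	for _, h := range hexBlobs {
		b, err := hex.DecodeString(strings.TrimPrefix(h, "0x"))
		require.NoError(t, err)
		blobs = append(blobs, b)
	}
	return blobs
}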

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c012f71 and 819f4ce.

📒 Files selected for processing (3)
  • common/version/version.go (1 hunks)
  • rollup/internal/controller/relayer/l2_relayer_sanity.go (11 hunks)
  • rollup/internal/controller/relayer/l2_relayer_sanity_test.go (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-07-29T16:38:24.647Z
Learnt from: colinlyguo
PR: scroll-tech/scroll#1714
File: rollup/internal/controller/relayer/l2_relayer.go:1548-1555
Timestamp: 2025-07-29T16:38:24.647Z
Learning: In rollup/internal/controller/relayer/l2_relayer.go, the validateBatchFields function should error out when GetBatchByHash fails to find a parent batch. This is intentional behavior - missing parent batches represent genuine error conditions that should halt batch submission processing. Genesis batch handling occurs separately from normal batch validation flow.

Applied to files:

  • rollup/internal/controller/relayer/l2_relayer_sanity_test.go
  • rollup/internal/controller/relayer/l2_relayer_sanity.go
📚 Learning: 2025-07-07T12:02:52.214Z
Learnt from: colinlyguo
PR: scroll-tech/scroll#1693
File: rollup/abi/validium_abi.go:13-15
Timestamp: 2025-07-07T12:02:52.214Z
Learning: In the Scroll codebase, ABI initialization in files like bridge_abi.go and validium_abi.go follows a consistent pattern of ignoring errors from GetAbi() in init functions to maintain code style consistency across the project.

Applied to files:

  • rollup/internal/controller/relayer/l2_relayer_sanity.go
🧬 Code Graph Analysis (2)
rollup/internal/controller/relayer/l2_relayer_sanity_test.go (3)
rollup/abi/bridge_abi.go (1)
  • ScrollChainABI (10-10)
common/version/version.go (1)
  • Version (31-31)
coordinator/internal/types/auth.go (1)
  • Hash (134-141)
rollup/internal/controller/relayer/l2_relayer_sanity.go (2)
rollup/internal/controller/relayer/l2_relayer.go (1)
  • Layer2Relayer (66-97)
rollup/internal/orm/batch.go (2)
  • Batch (25-75)
  • Batch (83-85)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: check
  • GitHub Check: tests
  • GitHub Check: tests
🔇 Additional comments (4)
common/version/version.go (1)

8-8: Version bump looks good, but update the PR checklist.

The version has been correctly incremented from v4.5.39 to v4.5.40. However, the PR checklist indicates the "tag in common/version.go has been updated" checkbox is unchecked - please check this box since you've updated the version.

rollup/internal/controller/relayer/l2_relayer_sanity.go (3)

49-82: Good refactor to top-level function.

Moving parseCommitBatchesCalldata to a top-level function improves reusability (as demonstrated in the test file) and follows good separation of concerns.


146-420: Excellent comprehensive validation logic.

The new database consistency validation functions provide thorough checks with clear error messages that will greatly help with debugging. The validation covers:

  • Batch and chunk continuity
  • Codec version consistency
  • L1 message queue hash consistency
  • Essential field validation

This significantly improves the robustness of the sanity checks.
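
For readers who want a feel for the first bullet, a schematic continuity check could look like the sketch below; ChunkMeta and its fields are illustrative stand-ins, not the ORM models the file actually uses:

// Schematic chunk-continuity check; ChunkMeta and its fields are
// illustrative, not the ORM models in rollup/internal/orm.
package sketch

import "fmt"

type ChunkMeta struct {
	Index            uint64
	StartBlockNumber uint64
	EndBlockNumber   uint64
}

// checkChunkContinuity verifies that chunk indices increase by one and that
// block ranges are contiguous, in the spirit of the database-consistency
// validation described above.
func checkChunkContinuity(chunks []ChunkMeta) error {
	for i := 1; i < len(chunks); i++ {
		prev, cur := chunks[i-1], chunks[i]
		if cur.Index != prev.Index+1 {
			return fmt.Errorf("chunk index gap: %d followed by %d", prev.Index, cur.Index)
		}
		if cur.StartBlockNumber != prev.EndBlockNumber+1 {
			return fmt.Errorf("block range gap between chunk %d (ends at %d) and chunk %d (starts at %d)",
				prev.Index, prev.EndBlockNumber, cur.Index, cur.StartBlockNumber)
		}
	}
	return nil
}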


464-467: Critical: L1 messages assignment will be overwritten.

The L1 messages are assigned to blocks[i].Transactions on line 465, but this assignment is immediately overwritten by the append operation on line 467. The L1 messages should be included in the append operation.

-		if l1Messages, ok := l1MessagesWithBlockNumbers[daBlocks[i].Number()]; ok {
-			blocks[i].Transactions = l1Messages
-		}
-		blocks[i].Transactions = append(blocks[i].Transactions, encoding.TxsToTxsData(txns[i])...)
+		if l1Messages, ok := l1MessagesWithBlockNumbers[daBlocks[i].Number()]; ok {
+			blocks[i].Transactions = append(l1Messages, encoding.TxsToTxsData(txns[i])...)
+		} else {
+			blocks[i].Transactions = encoding.TxsToTxsData(txns[i])
+		}
⛔ Skipped due to learnings
Learnt from: colinlyguo
PR: scroll-tech/scroll#1714
File: rollup/internal/controller/relayer/l2_relayer.go:1548-1555
Timestamp: 2025-07-29T16:38:24.647Z
Learning: In rollup/internal/controller/relayer/l2_relayer.go, the validateBatchFields function should error out when GetBatchByHash fails to find a parent batch. This is intentional behavior - missing parent batches represent genuine error conditions that should halt batch submission processing. Genesis batch handling occurs separately from normal batch validation flow.

@codecov-commenter

codecov-commenter commented Aug 11, 2025

Codecov Report

❌ Patch coverage is 37.73585% with 33 lines in your changes missing coverage. Please review.
✅ Project coverage is 37.48%. Comparing base (c012f71) to head (7b20c8a).

Files with missing lines | Patch % | Lines
...p/internal/controller/relayer/l2_relayer_sanity.go | 37.73% | 29 Missing and 4 partials ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #1720      +/-   ##
===========================================
- Coverage    37.51%   37.48%   -0.03%     
===========================================
  Files          243      243              
  Lines        20466    20499      +33     
===========================================
+ Hits          7678     7685       +7     
- Misses       11981    12005      +24     
- Partials       807      809       +2     
Flag | Coverage Δ
rollup | 37.57% <37.73%> (-0.09%) ⬇️

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.

yiweichi previously approved these changes Aug 11, 2025
@colinlyguo colinlyguo force-pushed the fix-rollup-relayer-sanity-checks branch from dca5306 to d6e03e1 on August 12, 2025 at 03:45

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🔭 Outside diff range comments (2)
rollup/internal/controller/relayer/l2_relayer_sanity.go (2)

49-82: Harden calldata parsing: method existence, selector check, calldata length, and avoid package shadowing

  • Avoid shadowing the imported abi package with a parameter named abi.
  • Validate rollupABI != nil.
  • Ensure the calldata is at least 4 bytes and the 4-byte selector matches commitBatches.
  • Check that the ABI actually contains commitBatches before unpacking.

These guardrails prevent panics and mis-decoding.

Apply this diff:

-// parseCommitBatchesCalldata parses the commitBatches calldata and extracts key information
-func parseCommitBatchesCalldata(abi *abi.ABI, calldata []byte) (*CalldataInfo, error) {
-	method := abi.Methods["commitBatches"]
-	decoded, err := method.Inputs.Unpack(calldata[4:])
+// parseCommitBatchesCalldata parses the commitBatches calldata and extracts key information
+func parseCommitBatchesCalldata(rollupABI *abi.ABI, calldata []byte) (*CalldataInfo, error) {
+	if rollupABI == nil {
+		return nil, fmt.Errorf("nil ABI: cannot parse commitBatches calldata")
+	}
+	method, ok := rollupABI.Methods["commitBatches"]
+	if !ok {
+		return nil, fmt.Errorf("ABI missing method commitBatches")
+	}
+	if len(calldata) < 4 {
+		return nil, fmt.Errorf("calldata too short: need at least 4 bytes for selector, got %d", len(calldata))
+	}
+	if !bytes.Equal(calldata[:4], method.ID) {
+		return nil, fmt.Errorf("function selector mismatch: got 0x%x, want 0x%x (commitBatches)", calldata[:4], method.ID)
+	}
+	decoded, err := method.Inputs.Unpack(calldata[4:])
 	if err != nil {
 		return nil, fmt.Errorf("failed to unpack commitBatches calldata: %w", err)
 	}
 
 	if len(decoded) != 3 {
 		return nil, fmt.Errorf("unexpected number of decoded parameters: got %d, want 3", len(decoded))
 	}
 
 	version, ok := decoded[0].(uint8)
 	if !ok {
 		return nil, fmt.Errorf("failed to type assert version to uint8")
 	}
 
-	parentBatchHashB, ok := decoded[1].([32]uint8)
+	parentBatchHashB, ok := decoded[1].([32]uint8)
 	if !ok {
 		return nil, fmt.Errorf("failed to type assert parentBatchHash to [32]uint8")
 	}
 	parentBatchHash := common.BytesToHash(parentBatchHashB[:])
 
-	lastBatchHashB, ok := decoded[2].([32]uint8)
+	lastBatchHashB, ok := decoded[2].([32]uint8)
 	if !ok {
 		return nil, fmt.Errorf("failed to type assert lastBatchHash to [32]uint8")
 	}
 	lastBatchHash := common.BytesToHash(lastBatchHashB[:])
 
 	return &CalldataInfo{
 		Version:         version,
 		ParentBatchHash: parentBatchHash,
 		LastBatchHash:   lastBatchHash,
 	}, nil
 }

Additionally, add the missing import:

// at top of file
import (
    "bytes"
    // existing imports...
)

160-167: Avoid potential panic when accessing the first chunk

batchesToValidate[0].Chunks[0] is accessed before verifying the batch actually has chunks. Add a guard for len(batchesToValidate[0].Chunks) == 0 before indexing.

Apply this diff:

 	// Get previous chunk for continuity check
-	firstChunk := batchesToValidate[0].Chunks[0]
+	if len(batchesToValidate[0].Chunks) == 0 {
+		return fmt.Errorf("first batch %d has no chunks", batchesToValidate[0].Batch.Index)
+	}
+	firstChunk := batchesToValidate[0].Chunks[0]
 	if firstChunk.Index == 0 {
 		return fmt.Errorf("genesis chunk should not be in normal batch submission flow, chunk index: %d", firstChunk.Index)
 	}
 
 	prevChunk, err := r.chunkOrm.GetChunkByIndex(r.ctx, firstChunk.Index-1)
♻️ Duplicate comments (1)
rollup/internal/controller/relayer/l2_relayer_sanity.go (1)

126-149: Validate L1 message counts per chunk and include block hash in invariant error

Two improvements:

  • Cross-check that the number of collected L1 messages across the chunk’s block range equals chunk.TotalL1MessagesPoppedInChunk. This catches DB/code drift early.
  • Include the block hash in the invariant violation for easier debugging (if available on the returned block type).

Also see past review about early break after first L2 tx; if you still want that micro-optimization, it can be applied without changing behavior.

Apply this diff to add the count check:

 		for _, chunk := range dbChunks {
 			if chunk.TotalL1MessagesPoppedInChunk > 0 {
 				blockWithL1Messages, err := r.l2BlockOrm.GetL2BlocksInRange(r.ctx, chunk.StartBlockNumber, chunk.EndBlockNumber)
 				if err != nil {
 					return nil, nil, fmt.Errorf("failed to get L2 blocks for chunk %d: %w", chunk.Index, err)
 				}
+				var collected uint64
 				for _, block := range blockWithL1Messages {
 					bn := block.Header.Number.Uint64()
 					seenL2 := false
 					for _, tx := range block.Transactions {
 						if tx.Type == types.L1MessageTxType {
 							if seenL2 {
-								// Invariant violated: found an L1 after an L2 in the same block.
-								return nil, nil, fmt.Errorf("L1 message after L2 tx in block %d", bn)
+								// Invariant violated: found an L1 after an L2 in the same block.
+								return nil, nil, fmt.Errorf("L1 message after L2 tx in block %d (hash=%s)", bn, block.Header.Hash().Hex())
 							}
 							l1MessagesWithBlockNumbers[bn] = append(l1MessagesWithBlockNumbers[bn], tx)
+							collected++
 						} else {
 							seenL2 = true
 						}
 					}
 				}
+				if collected != chunk.TotalL1MessagesPoppedInChunk {
+					return nil, nil, fmt.Errorf("chunk %d L1 message count mismatch: collected=%d, expected=%d (block_range=%d-%d)",
+						chunk.Index, collected, chunk.TotalL1MessagesPoppedInChunk, chunk.StartBlockNumber, chunk.EndBlockNumber)
+				}
 			}
 		}

Optional micro-optimization (keeps the invariant check but reduces inner work): stop scanning the remainder of the block once the first L2 tx is seen since any later L1 would already be caught by the existing check.

-					for _, tx := range block.Transactions {
+					for _, tx := range block.Transactions {
 						if tx.Type == types.L1MessageTxType {
 							if seenL2 {
 								// Invariant violated: found an L1 after an L2 in the same block.
 								return nil, nil, fmt.Errorf("L1 message after L2 tx in block %d (hash=%s)", bn, block.Header.Hash().Hex())
 							}
 							l1MessagesWithBlockNumbers[bn] = append(l1MessagesWithBlockNumbers[bn], tx)
 							collected++
 						} else {
 							seenL2 = true
+							// We can break here; any subsequent L1 would be caught at the top of a new iteration
+							// if the invariant is ever broken in another context.
+							break
 						}
 					}
🧹 Nitpick comments (3)
rollup/internal/controller/relayer/l2_relayer_sanity.go (3)

349-356: Strict equality checks are good; consider logging actual vs. expected batch indices as context

The mismatch errors are clear. If these errors surface in production, having both indexes and hashes is already helpful. No action required.


456-476: Transaction assembly: ensure no duplication and order correctness

  • Prepending per-block L1 messages and appending on-chain transactions from the payload preserves the intended order.
  • Consider optionally asserting that encoding.TxsToTxsData(txns[i]) contains no L1 messages (if the codec guarantees this invariant). A quick defensive assertion would tighten checks.

If helpful, I can add a guard like:

for _, t := range encoding.TxsToTxsData(txns[i]) {
    if t.Type == types.L1MessageTxType {
        return nil, fmt.Errorf("unexpected L1 message inside payload txs for block %d", daBlocks[i].Number())
    }
}

Would you like me to wire a unit test to cover this case and improve Codecov on this file?


328-371: Good cross-checks across calldata/db/blobs; consider adding tests for error paths

The validations on codec version, parent/last batch hashes, and blob count are solid. Given Codecov feedback, adding unit tests for:

  • mismatched selector/version,
  • parent/last batch mismatch,
  • blob count mismatch,
  • and l1Messages invariant violation
    would meaningfully improve patch coverage.

I can draft focused tests in l2_relayer_sanity_test.go to exercise these error paths. Want me to push those?
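
As a concrete starting point, a table-driven test along the following lines would cover the first two bullets. It assumes the hardened parseCommitBatchesCalldata from the suggested diff above (the unhardened version slices calldata[4:] without a length check) and a minimal ABI containing only the three commitBatches parameters the parser decodes; the ABI JSON, import paths, and test name are assumptions rather than project code:

// Hedged test sketch: assumes the hardened parseCommitBatchesCalldata from the
// suggested diff; the minimal ABI below only mirrors the three decoded
// parameters (uint8 version, bytes32 parentBatchHash, bytes32 lastBatchHash)
// and may not match the real ScrollChain ABI. Import paths assume the
// scroll-tech go-ethereum fork used elsewhere in the repo.
package relayer

import (
	"strings"
	"testing"

	"github.com/scroll-tech/go-ethereum/accounts/abi"
	"github.com/stretchr/testify/assert"
)

const commitBatchesABIJSON = `[{"type":"function","name":"commitBatches","inputs":[
	{"name":"version","type":"uint8"},
	{"name":"parentBatchHash","type":"bytes32"},
	{"name":"lastBatchHash","type":"bytes32"}]}]`

func TestParseCommitBatchesCalldataErrorPaths(t *testing.T) {
	parsedABI, err := abi.JSON(strings.NewReader(commitBatchesABIJSON))
	assert.NoError(t, err)

	cases := []struct {
		name     string
		calldata []byte
	}{
		{"calldata shorter than the 4-byte selector", []byte{0x01}},
		{"selector that is not commitBatches", append([]byte{0xde, 0xad, 0xbe, 0xef}, make([]byte, 96)...)},
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			_, err := parseCommitBatchesCalldata(&parsedABI, tc.calldata)
			assert.Error(t, err)
		})
	}
}

Assertions could be tightened to match the specific error strings once the hardened parser lands.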

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 20ccb12 and d6e03e1.

📒 Files selected for processing (1)
  • rollup/internal/controller/relayer/l2_relayer_sanity.go (11 hunks)
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-07-07T12:02:52.214Z
Learnt from: colinlyguo
PR: scroll-tech/scroll#1693
File: rollup/abi/validium_abi.go:13-15
Timestamp: 2025-07-07T12:02:52.214Z
Learning: In the Scroll codebase, ABI initialization in files like bridge_abi.go and validium_abi.go follows a consistent pattern of ignoring errors from GetAbi() in init functions to maintain code style consistency across the project.

Applied to files:

  • rollup/internal/controller/relayer/l2_relayer_sanity.go
📚 Learning: 2025-07-29T16:38:24.647Z
Learnt from: colinlyguo
PR: scroll-tech/scroll#1714
File: rollup/internal/controller/relayer/l2_relayer.go:1548-1555
Timestamp: 2025-07-29T16:38:24.647Z
Learning: In rollup/internal/controller/relayer/l2_relayer.go, the validateBatchFields function should error out when GetBatchByHash fails to find a parent batch. This is intentional behavior - missing parent batches represent genuine error conditions that should halt batch submission processing. Genesis batch handling occurs separately from normal batch validation flow.

Applied to files:

  • rollup/internal/controller/relayer/l2_relayer_sanity.go
🧬 Code Graph Analysis (1)
rollup/internal/controller/relayer/l2_relayer_sanity.go (2)
rollup/internal/controller/relayer/l2_relayer.go (1)
  • Layer2Relayer (66-97)
rollup/internal/orm/batch.go (2)
  • Batch (25-75)
  • Batch (83-85)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: tests
  • GitHub Check: tests
  • GitHub Check: check
🔇 Additional comments (3)
rollup/internal/controller/relayer/l2_relayer_sanity.go (3)

85-96: Error messages good; confirm intentional hard-fail on missing parent batch

Returning an error when parent/last batches are not found aligns with prior decisions for batch validation flow (missing parents are error conditions).


239-246: LGTM: correct behavior when parent batch missing

This explicit fetch of the parent batch for the first batch and failing on not found is consistent with the team’s intended behavior (genesis handled separately; missing parents should halt processing).


430-454: Batch assembly looks correct; hashes derived from payload ensure consistency with blob

Constructing the encoding.Batch from payload-derived queue hashes and blocks, then hashing via the codec, is the right way to ensure blob fidelity against DB metadata.

@colinlyguo colinlyguo requested a review from yiweichi August 12, 2025 04:28
@colinlyguo colinlyguo merged commit ae791a0 into develop Aug 12, 2025
6 checks passed
@colinlyguo colinlyguo deleted the fix-rollup-relayer-sanity-checks branch August 12, 2025 06:57