
Conversation

@DrEverr DrEverr (Member) commented Oct 1, 2025

Resolves: #410

@coderabbitai coderabbitai bot (Contributor) commented Oct 1, 2025

Important

Review skipped

Auto incremental reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting reviews.review_status to false in the CodeRabbit configuration file.

📝 Walkthrough

Summary by CodeRabbit

  • Bug Fixes
    • Detects and rejects duplicate service updates within a single request to prevent conflicting actions.
    • Adds validation before and after processing to ensure service list integrity.
    • Returns a clear accumulation error when duplicates are found, improving feedback and stability.
    • Enhances reliability of state updates by blocking inconsistent inputs.

Walkthrough

Adds duplicate-Create detection for UpdateService items in accumulation: introduces a hasDuplicatedServices helper (added twice by mistake), validates before and after accumulation, and returns ACCUMULATION_ERROR on duplicates. Also imports UpdateService and UpdateServiceKind from @typeberry/state.

Changes

Cohort / File(s): Accumulation validation and imports — packages/jam/transition/accumulate/accumulate.ts
Summary: Added duplicate-Create check via private hasDuplicatedServices(updateServices); invoked pre- and post-accumulation to return ACCUMULATION_ERROR when duplicates by serviceId exist. Imported UpdateService, UpdateServiceKind from @typeberry/state. Note: hasDuplicatedServices is defined twice in the file.
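
For illustration, a minimal sketch of what such a duplicate-Create check can look like. It uses simplified stand-in types rather than the actual UpdateService/UpdateServiceKind shapes from @typeberry/state, so the field names here are assumptions, not the project's API:

```ts
// Simplified stand-ins for @typeberry/state's UpdateService / UpdateServiceKind;
// the real types carry more data and may use different field names.
enum UpdateServiceKind {
  Create = 0,
  Update = 1,
}

interface UpdateService {
  serviceId: number;
  kind: UpdateServiceKind;
}

/** Returns true when two Create entries target the same serviceId. */
function hasDuplicatedServices(updateServices: UpdateService[]): boolean {
  const createdServiceIds = new Set<number>();
  for (const { serviceId, kind } of updateServices) {
    // Only Create actions are checked, mirroring the duplicate-Create detection above.
    if (kind !== UpdateServiceKind.Create) {
      continue;
    }
    if (createdServiceIds.has(serviceId)) {
      return true; // duplicated service creation -> caller returns ACCUMULATION_ERROR
    }
    createdServiceIds.add(serviceId);
  }
  return false;
}
```

Per the walkthrough, the real helper is invoked both before and after accumulation, and a true result makes the caller return Result.error(ACCUMULATION_ERROR).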

Sequence Diagram(s)

sequenceDiagram
  participant Caller
  participant Accumulate
  participant DupCheck as hasDuplicatedServices

  Caller->>Accumulate: accumulate(updates)
  Accumulate->>DupCheck: Check duplicates in updates.services
  alt Duplicates found
    DupCheck-->>Accumulate: true
    Accumulate-->>Caller: ACCUMULATION_ERROR
  else No duplicates
    DupCheck-->>Accumulate: false
    Accumulate->>Accumulate: compute state
    Accumulate->>DupCheck: Check duplicates in state.services
    alt Duplicates found
      DupCheck-->>Accumulate: true
      Accumulate-->>Caller: ACCUMULATION_ERROR
    else None
      DupCheck-->>Accumulate: false
      Accumulate-->>Caller: accumulated state
    end
  end

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Possibly related PRs

  • Refactor PartialStateDb #492 — Refactors Accumulate to use stateUpdate.services; overlaps with current PR’s service-update handling and validation flow in the same file.

Suggested reviewers

  • tomusdrw
  • mateuszsikora

Poem

A rabbit taps keys with a careful cheer,
Sniffs out duplicates—“No twins in here!”
Before and after, the checks align,
Errors hop back with a clear red sign.
Two helpers sprout—oops, a mirrored vine—
Snip the twin, and all’s just fine. 🐇✨

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)

  • Linked Issues Check — ⚠️ Warning
    Explanation: The pull request adds duplicate service ID detection but does not address the core requirement of failing the entire accumulation when any error occurs during service accumulation, such as handling null stateUpdate cases in accumulateInParallel as specified in issue #410.
    Resolution: Update the implementation to replace the continue on null stateUpdate with a complete accumulation failure and ensure that any service accumulation error triggers ACCUMULATION_ERROR in accordance with the linked issue objectives.
  • Out of Scope Changes Check — ⚠️ Warning
    Explanation: The file contains two identical definitions of the private hasDuplicatedServices helper, which appears to be an inadvertent duplication and is unrelated to the specified objectives of error handling in the accumulation process.
    Resolution: Remove the redundant hasDuplicatedServices definition and ensure only a single implementation exists so that all changes align strictly with the intended objectives.
✅ Passed checks (3 passed)

  • Title Check — ✅ Passed
    The pull request title “Check creation of two services with the same ID” succinctly describes the primary change of detecting duplicate service IDs and is concise and clear enough for teammates to understand the focus of the change.
  • Description Check — ✅ Passed
    The description “Resolves: #410” directly references the linked issue that the pull request addresses and thus remains on-topic with the changeset.
  • Docstring Coverage — ✅ Passed
    No functions found in the changes. Docstring coverage check skipped.

Comment @coderabbitai help to get the list of available commands and usage tips.

@DrEverr DrEverr requested a review from tomusdrw October 1, 2025 11:49
@DrEverr DrEverr marked this pull request as ready for review October 1, 2025 15:51
@DrEverr DrEverr requested a review from mateuszsikora October 1, 2025 15:51
@coderabbitai coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
packages/jam/transition/accumulate/accumulate.ts (2)

311-354: Consensus-critical: accumulation still continues after per-service failure; must abort block.

Loop assigns currentState = stateUpdate === null ? checkpoint : stateUpdate and proceeds. This contradicts #410’s requirement to fail the entire accumulation on any accumulation error, risking unpaid work and consensus divergence.

Proposed minimal change: propagate errors with Result and short-circuit on first failure.

Apply these diffs:

-  private async accumulateInParallel(
+  private async accumulateInParallel(
     accumulateData: AccumulateData,
     slot: TimeSlot,
     entropy: EntropyHash,
     statistics: Map<ServiceId, CountAndGasUsed>,
     inputStateUpdate: AccumulationStateUpdate,
-  ): Promise<ParallelAccumulationResult> {
+  ): Promise<Result<ParallelAccumulationResult, ACCUMULATION_ERROR>> {
@@
-      const { consumedGas, stateUpdate } = await this.accumulateSingleService(
+      const { consumedGas, stateUpdate } = await this.accumulateSingleService(
         serviceId,
         accumulateData.getOperands(serviceId),
         accumulateData.getGasCost(serviceId),
         slot,
         entropy,
         currentState,
       );
@@
-      currentState = stateUpdate === null ? checkpoint : stateUpdate;
+      if (stateUpdate === null) {
+        logger.error`Accumulation failed for ${serviceId}; aborting entire accumulation.`;
+        return Result.error(ACCUMULATION_ERROR);
+      }
+      currentState = stateUpdate;
@@
-    return {
-      state: currentState,
-      gasCost,
-    };
+    return Result.ok({
+      state: currentState,
+      gasCost,
+    });

And propagate in sequential accumulation:

-  private async accumulateSequentially(
+  private async accumulateSequentially(
     gasLimit: ServiceGas,
     reports: WorkReport[],
     slot: TimeSlot,
     entropy: EntropyHash,
     statistics: Map<ServiceId, CountAndGasUsed>,
     stateUpdate: AccumulationStateUpdate,
-  ): Promise<SequentialAccumulationResult> {
+  ): Promise<Result<SequentialAccumulationResult, ACCUMULATION_ERROR>> {
@@
-      return {
+      return Result.ok({
         accumulatedReports: tryAsU32(0),
         gasCost: tryAsServiceGas(0),
         state: stateUpdate,
-      };
+      });
@@
-    const {
-      gasCost,
-      state: stateAfterParallelAcc,
-      ...rest
-    } = await this.accumulateInParallel(accumulateData, slot, entropy, statistics, stateUpdate);
+    const par = await this.accumulateInParallel(accumulateData, slot, entropy, statistics, stateUpdate);
+    if (par.isError) return par as Result<SequentialAccumulationResult, ACCUMULATION_ERROR>;
+    const { gasCost, state: stateAfterParallelAcc, ...rest } = par.ok;
@@
-    const {
-      accumulatedReports,
-      gasCost: seqGasCost,
-      state,
-      ...seqRest
-    } = await this.accumulateSequentially(
+    const seq = await this.accumulateSequentially(
       tryAsServiceGas(gasLimit - gasCost),
       reportsToAccumulateSequentially,
       slot,
       entropy,
       statistics,
       stateAfterParallelAcc,
     );
+    if (seq.isError) return seq;
+    const { accumulatedReports, gasCost: seqGasCost, state, ...seqRest } = seq.ok;
@@
-    return {
+    return Result.ok({
       accumulatedReports: tryAsU32(i + accumulatedReports),
       gasCost: tryAsServiceGas(gasCost + seqGasCost),
       state,
-    };
+    });

And handle Result at the call site:

-    const { accumulatedReports, gasCost, state, ...rest } = await this.accumulateSequentially(
+    const seqRes = await this.accumulateSequentially(
       gasLimit,
       accumulatableReports,
       slot,
       entropy,
       statistics,
       AccumulationStateUpdate.empty(),
     );
+    if (seqRes.isError) return Result.error(ACCUMULATION_ERROR);
+    const { accumulatedReports, gasCost, state, ...rest } = seqRes.ok;

This enforces block invalidation on any accumulation error as per #410.


90-199: Update Graypaper links to current version (0.1.3).

All Graypaper URLs in packages/jam/transition/accumulate/accumulate.ts still use v=0.6.7; change them to v=0.1.3 (occurrences at lines 93, 116, 169, 185, 195, 203, 218, 233, 296, 395, 418).

🧹 Nitpick comments (2)
packages/jam/transition/accumulate/accumulate.ts (2)

431-445: Tighten logging and naming in duplicate-create detector.

  • Use error severity for consensus-invalid condition.
  • Prefer clearer naming: this checks duplicate Create actions, not “duplicated services.”
-  private hasDuplicatedServices(updateServices: UpdateService[]): boolean {
+  private hasDuplicateServiceCreates(updateServices: UpdateService[]): boolean {
@@
-        if (createdServiceIds.has(serviceId)) {
-          logger.log`Duplicated Service creation detected ${serviceId}. Block is invalid.`;
+        if (createdServiceIds.has(serviceId)) {
+          logger.error`Duplicate service creation detected ${serviceId}. Block is invalid.`;
           return true;
         }

Also update the call site accordingly (see next comment).


486-489: Good: block invalidation on duplicate service creates; add tests and fail earlier if possible.

  • Keep this top-level guard. Add an integration test creating two UpdateService.create entries with the same serviceId across different services to assert Result.error(ACCUMULATION_ERROR).
  • Optional: fail faster by tracking created IDs inside accumulateInParallel and aborting immediately when a duplicate is seen (saves PVM work).

Would you like me to add the test scaffolding for this path?
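
For orientation, a self-contained sketch of the assertion such a test could make. The helper below only mirrors the duplicate-Create logic locally; the real integration test would go through the actual accumulate entry point and typeberry types and assert Result.error(ACCUMULATION_ERROR) on its result:

```ts
import { test } from "node:test";
import assert from "node:assert/strict";

// Local stand-in for the duplicate-Create detection; illustrative only.
type ServiceUpdate = { serviceId: number; kind: "Create" | "Update" };

function detectDuplicateCreates(updates: ServiceUpdate[]): "ACCUMULATION_ERROR" | "ok" {
  const created = new Set<number>();
  for (const u of updates) {
    if (u.kind !== "Create") continue;
    if (created.has(u.serviceId)) return "ACCUMULATION_ERROR";
    created.add(u.serviceId);
  }
  return "ok";
}

test("two Create entries with the same serviceId are rejected", () => {
  const updates: ServiceUpdate[] = [
    { serviceId: 7, kind: "Create" },
    { serviceId: 7, kind: "Create" }, // the same serviceId created twice across services
  ];
  assert.equal(detectDuplicateCreates(updates), "ACCUMULATION_ERROR");
});
```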

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 135961b and 0637223.

📒 Files selected for processing (1)
  • packages/jam/transition/accumulate/accumulate.ts (3 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.ts

⚙️ CodeRabbit configuration file

**/*.ts: as conversions must not be used. Suggest using tryAs conversion methods.

**/*.ts: Classes with static Codec field must have private constructor and static create method.

**/*.ts: Casting a bigint (or U64) using Number(x) must have an explanation comment why
it is safe.

**/*.ts: When making changes to code with comments containing links (in classes, constants, methods, etc.)
to graypaper.fluffylabs.dev, ensure those links point to the current version for this update.

Files:

  • packages/jam/transition/accumulate/accumulate.ts
🧠 Learnings (1)
📚 Learning: 2025-06-10T12:10:10.532Z
Learnt from: tomusdrw
PR: FluffyLabs/typeberry#419
File: packages/jam/state-merkleization/serialize-update.ts:115-126
Timestamp: 2025-06-10T12:10:10.532Z
Learning: In packages/jam/state-merkleization/serialize-update.ts, service removal is handled separately from service updates. The UpdateServiceKind enum does not include a Remove variant. Service removals are handled via the servicesRemoved parameter in serializeUpdate() which is processed by serializeRemovedServices(), while service updates/creations are handled via servicesUpdates parameter processed by serializeServiceUpdates().

Applied to files:

  • packages/jam/transition/accumulate/accumulate.ts
🧬 Code graph analysis (1)
packages/jam/transition/accumulate/accumulate.ts (4)
packages/jam/state/in-memory-state.ts (1)
  • updateServices (368-398)
packages/jam/state/state-update.ts (2)
  • UpdateService (114-151)
  • update (129-134)
packages/jam/block/common.ts (1)
  • ServiceId (26-26)
packages/jam/jam-host-calls/logger.ts (1)
  • logger (3-3)
🔇 Additional comments (1)
packages/jam/transition/accumulate/accumulate.ts (1)

32-34: LGTM on imports.

Runtime enum usage (UpdateServiceKind) and type-only import (UpdateService) are appropriate.

@github-actions github-actions bot commented Oct 7, 2025

File[case] | Benchmark | Ops/s | ±% | Result
codec/bigint.decode.ts[0] decode custom 168616935.46 ±2.59% fastest ✅
codec/bigint.decode.ts[1] decode bigint 106642453.18 ±2% 36.75% slower
codec/encoding.ts[0] manual encode 3293013.06 ±1.13% 19.01% slower
codec/encoding.ts[1] int32array encode 3651173.19 ±4.97% 10.2% slower
codec/encoding.ts[2] dataview encode 4065933.42 ±0.72% fastest ✅
codec/decoding.ts[0] manual decode 22900345.26 ±0.78% 86.11% slower
codec/decoding.ts[1] int32array decode 164880375.17 ±2.58% fastest ✅
codec/decoding.ts[2] dataview decode 162372332.82 ±3.14% 1.52% slower
collections/map-set.ts[0] 2 gets + conditional set 119844.43 ±0.37% fastest ✅
collections/map-set.ts[1] 1 get 1 set 61345.79 ±0.33% 48.81% slower
bytes/hex-from.ts[0] parse hex using Number with NaN checking 138048.49 ±0.55% 84.41% slower
bytes/hex-from.ts[1] parse hex from char codes 885339.15 ±0.31% fastest ✅
bytes/hex-from.ts[2] parse hex from string nibbles 545492.13 ±0.47% 38.39% slower
bytes/hex-to.ts[0] number toString + padding 309221.86 ±0.46% fastest ✅
bytes/hex-to.ts[1] manual 15695.73 ±0.53% 94.92% slower
codec/bigint.compare.ts[0] compare custom 249336408.35 ±5.8% 1.18% slower
codec/bigint.compare.ts[1] compare bigint 252313384.02 ±5.13% fastest ✅
hash/index.ts[0] hash with numeric representation 173.73 ±1.04% 31.11% slower
hash/index.ts[1] hash with string representation 108.12 ±1.01% 57.13% slower
hash/index.ts[2] hash with symbol representation 169.52 ±0.97% 32.78% slower
hash/index.ts[3] hash with uint8 representation 153.38 ±0.57% 39.18% slower
hash/index.ts[4] hash with packed representation 252.18 ±0.74% fastest ✅
hash/index.ts[5] hash with bigint representation 187.62 ±0.32% 25.6% slower
hash/index.ts[6] hash with uint32 representation 193.11 ±0.99% 23.42% slower
math/add_one_overflow.ts[0] add and take modulus 240587442.56 ±6.76% 4.6% slower
math/add_one_overflow.ts[1] condition before calculation 252187790.35 ±6.22% fastest ✅
math/count-bits-u32.ts[0] standard method 91902857.33 ±1.87% 61.3% slower
math/count-bits-u32.ts[1] magic 237466126.52 ±6.5% fastest ✅
math/count-bits-u64.ts[0] standard method 1514820.72 ±0.82% 84.96% slower
math/count-bits-u64.ts[1] magic 10074881.97 ±2.14% fastest ✅
math/mul_overflow.ts[0] multiply and bring back to u32 244410183.98 ±8.32% fastest ✅
math/mul_overflow.ts[1] multiply and take modulus 240759480.31 ±7.76% 1.49% slower
math/switch.ts[0] switch 210789300.83 ±8.14% 16.35% slower
math/switch.ts[1] if 251992199.94 ±4.96% fastest ✅
logger/index.ts[0] console.log with string concat 7542390.15 ±56.35% fastest ✅
logger/index.ts[1] console.log with args 865223.41 ±112.77% 88.53% slower
codec/view_vs_object.ts[0] Get the first field from Decoded 455970.48 ±2.06% fastest ✅
codec/view_vs_object.ts[1] Get the first field from View 98571.32 ±1.19% 78.38% slower
codec/view_vs_object.ts[2] Get the first field as view from View 91868.14 ±1.67% 79.85% slower
codec/view_vs_object.ts[3] Get two fields from Decoded 415140.5 ±1.39% 8.95% slower
codec/view_vs_object.ts[4] Get two fields from View 64991.14 ±0.97% 85.75% slower
codec/view_vs_object.ts[5] Get two fields from materialized from View 138313.18 ±0.8% 69.67% slower
codec/view_vs_object.ts[6] Get two fields as views from View 65685.28 ±0.71% 85.59% slower
codec/view_vs_object.ts[7] Get only third field from Decoded 386947.88 ±1.36% 15.14% slower
codec/view_vs_object.ts[8] Get only third field from View 82761.64 ±1.02% 81.85% slower
codec/view_vs_object.ts[9] Get only third field as view from View 82801.79 ±0.87% 81.84% slower
codec/view_vs_collection.ts[0] Get first element from Decoded 23455.46 ±1.43% 55.02% slower
codec/view_vs_collection.ts[1] Get first element from View 52143.05 ±0.81% fastest ✅
codec/view_vs_collection.ts[2] Get 50th element from Decoded 21456.05 ±1.44% 58.85% slower
codec/view_vs_collection.ts[3] Get 50th element from View 23775.25 ±1.69% 54.4% slower
codec/view_vs_collection.ts[4] Get last element from Decoded 22854.31 ±1.51% 56.17% slower
codec/view_vs_collection.ts[5] Get last element from View 16119.69 ±1.97% 69.09% slower
bytes/compare.ts[0] Comparing Uint32 bytes 14777.48 ±2.52% 5.41% slower
bytes/compare.ts[1] Comparing raw bytes 15621.98 ±3.34% fastest ✅
collections/map_vs_sorted.ts[0] Map 262518.68 ±2.49% fastest ✅
collections/map_vs_sorted.ts[1] Map-array 93436.54 ±1.56% 64.41% slower
collections/map_vs_sorted.ts[2] Array 32101.56 ±3.21% 87.77% slower
collections/map_vs_sorted.ts[3] SortedArray 172093.52 ±1.4% 34.45% slower
hash/blake2b.ts[0] our hasher 1.96 ±4.2% fastest ✅
hash/blake2b.ts[1] blake2b js 0.05 ±0.92% 97.45% slower
crypto/ed25519.ts[0] native crypto 5.64 ±17.8% 81.53% slower
crypto/ed25519.ts[1] wasm lib 10.88 ±0.02% 64.37% slower
crypto/ed25519.ts[2] wasm lib batch 30.54 ±0.58% fastest ✅

Benchmarks summary: 63/63 OK ✅

@tomusdrw tomusdrw merged commit 8fd7637 into main Oct 7, 2025
15 checks passed
@tomusdrw tomusdrw deleted the maso-service-dup branch October 7, 2025 15:05

Development

Successfully merging this pull request may close these issues:

  • Consensus-critical: Error handling in accumulation process should fail entire accumulation