V10 publisher #97
// clearSharedMemoryAfter controls only whether the REMAINING unpublished triples are also cleared.
if (publishResult.status === 'confirmed') {
  const swmMetaGraph = this.graphManager.sharedMemoryMetaUri(contextGraphId);
if (options?.clearSharedMemoryAfter) {
🔴 Bug: Moving cleanup behind options?.clearSharedMemoryAfter changes behavior in a broken way: tentative publishes would now clear workspace data too early when the flag is true, and confirmed subset publishes leave already-enshrined roots in workspace when the flag is false. The old publishResult.status === 'confirmed' guard needs to stay, and the flag should only control whether any unpublished remainder is also wiped.
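A minimal sketch of the guard ordering this comment asks for, with hypothetical helper names standing in for the real cleanup calls:

```typescript
// Hypothetical sketch: cleanup only runs for confirmed publishes, and the flag
// widens it to the unpublished remainder instead of gating cleanup entirely.
async function cleanupAfterPublish(
  publishResult: { status: 'confirmed' | 'tentative' },
  contextGraphId: string,
  options?: { clearSharedMemoryAfter?: boolean },
): Promise<void> {
  if (publishResult.status !== 'confirmed') {
    return; // tentative publishes must never clear workspace data early
  }
  // Always remove the roots that this publish just enshrined.
  await clearEnshrinedRoots(contextGraphId);
  if (options?.clearSharedMemoryAfter) {
    // The opt-in flag only controls whether the unpublished remainder is also wiped.
    await clearRemainingWorkspaceTriples(contextGraphId);
  }
}

// Stand-ins for the real GraphManager / publisher cleanup calls.
declare function clearEnshrinedRoots(contextGraphId: string): Promise<void>;
declare function clearRemainingWorkspaceTriples(contextGraphId: string): Promise<void>;
```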
knownBatchContextGraphs?: Map<string, string>;
/** Shared write lock map. Pass to SharedMemoryHandler so gossip writes serialize against CAS writes. */
/** Shared map of workspace-owned rootEntities per paranet: entity → creatorPeerId. Pass from agent so handler and publisher stay in sync. */
workspaceOwnedEntities?: Map<string, Map<string, string>>;
🔴 Bug: This PR renames the old workspace API surface instead of aliasing it. packages/agent/src/dkg-agent.ts still passes sharedMemoryOwnedEntities, reads knownBatchContextGraphs, and calls writeToWorkspace / writeConditionalToWorkspace, so @origintrail-official/dkg-agent no longer type-checks. Keep deprecated aliases or migrate the agent in this PR.
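One way to keep the old option names as deprecated aliases; the field names mirror the diff, but the overall interface shape is an assumption:

```typescript
// Sketch of backward-compatible aliases so @origintrail-official/dkg-agent
// keeps type-checking while it migrates to the new names.
interface PublisherHandlerOptions {
  knownBatchParanets?: Map<string, string>;
  /** @deprecated Use knownBatchParanets. */
  knownBatchContextGraphs?: Map<string, string>;
  workspaceOwnedEntities?: Map<string, Map<string, string>>;
  /** @deprecated Use workspaceOwnedEntities. */
  sharedMemoryOwnedEntities?: Map<string, Map<string, string>>;
}

function resolveOwnedEntities(opts: PublisherHandlerOptions) {
  // Prefer the new name; fall back to the deprecated alias.
  return opts.workspaceOwnedEntities ?? opts.sharedMemoryOwnedEntities;
}
```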
type AsyncLiftPublishSuccess,
type AsyncLiftPublishFailureInput,
} from './async-lift-publish-result.js';
export { WorkspaceHandler, WorkspaceHandler as SharedMemoryHandler } from './workspace-handler.js';
🔴 Bug: This barrel no longer re-exports ACKCollector, StorageACKHandler, VerifyCollector, VerifyProposalHandler, and buildVerificationMetadata, but packages/agent/src/dkg-agent.ts still imports those symbols from @origintrail-official/dkg-publisher. That turns this rename into a repo-wide build break as soon as the publisher package is rebuilt. Keep the re-exports for compatibility, or migrate all downstream imports in this PR.
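A possible compatibility shim for the barrel; the module paths below are guesses at where these symbols live, not the package's actual layout:

```typescript
// Keep the previously public symbols visible from the package root until
// downstream imports are migrated. Paths are assumptions about package layout.
export { WorkspaceHandler, WorkspaceHandler as SharedMemoryHandler } from './workspace-handler.js';
export { ACKCollector, StorageACKHandler } from './ack-handler.js';
export { VerifyCollector, VerifyProposalHandler, buildVerificationMetadata } from './verify-handler.js';
```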
* Validates, stores locally in shared memory + shared-memory metadata, and returns an encoded gossip message.
* Acquires per-entity write locks to serialize against concurrent CAS writes.
*/
async share(
🔴 Bug: DKGPublisher no longer exposes the deprecated writeToWorkspace / writeConditionalToWorkspace entry points, but unchanged callers still use them (packages/agent/src/dkg-agent.ts, and the new packages/cli/test/publisher-wallets.test.ts). This PR will not typecheck after the rename unless those wrappers stay in place or every caller is updated here.
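A sketch of deprecated wrapper methods that keep the old entry points compiling; the signatures are placeholders, not DKGPublisher's real ones:

```typescript
// Hypothetical compatibility wrappers delegating to the renamed methods so
// existing callers (agent, CLI tests) compile until they migrate.
class DKGPublisherCompat {
  /** @deprecated Use share(). */
  async writeToWorkspace(payload: unknown): Promise<unknown> {
    return this.share(payload);
  }

  /** @deprecated Use shareConditional(). */
  async writeConditionalToWorkspace(payload: unknown): Promise<unknown> {
    return this.shareConditional(payload);
  }

  async share(payload: unknown): Promise<unknown> {
    return payload; // renamed implementation lives here in the real class
  }

  async shareConditional(payload: unknown): Promise<unknown> {
    return payload;
  }
}
```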
* resolved deterministically by keeping the alphabetically first creator.
*/
async reconstructSharedMemoryOwnership(): Promise<number> {
async reconstructWorkspaceOwnership(): Promise<number> {
🔴 Bug: The class now also drops the public draftCreate / draftWrite / draftQuery / draftPromote / draftDiscard API, but packages/agent/src/dkg-agent.ts and packages/publisher/test/draft-lifecycle.test.ts still depend on it. Preserve the draft wrappers or migrate/remove those consumers in the same PR.
branarakic
left a comment
Review: PR #97 — V10 Publisher
This is a large, ambitious PR (60 files, 6k+ additions) that introduces the async lift publishing system and renames the workspace API surface. Here's my detailed analysis:
Architecture — Async Lift Publisher
The new async publish pipeline is well-structured:
- Control plane: Job state machine persisted in TripleStore (not workspace graphs)
- State transitions: accepted -> claimed -> validated -> broadcast -> included -> finalized
- Wallet locks: Separate graph for ephemeral lease state
- Failure taxonomy: Clean categorization by phase (validation/broadcast/confirmation/recovery)
- Recovery: Startup sweep of stale locks and interrupted jobs
The docs (publish-flow.md) are thorough — the mermaid sequence diagrams and state transition tables are excellent.
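For readers skimming the PR, the state machine above can be restated as a small type sketch; the identifiers here are illustrative, not the PR's actual types:

```typescript
// Illustrative restatement of the async lift job lifecycle described above.
type LiftJobState =
  | 'accepted'   // enqueued, waiting for a runner
  | 'claimed'    // a runner holds the wallet lease
  | 'validated'  // payload resolved and checked
  | 'broadcast'  // transaction submitted
  | 'included'   // observed on-chain, awaiting finality
  | 'finalized'; // terminal success

const allowedTransitions: Record<LiftJobState, LiftJobState[]> = {
  accepted: ['claimed'],
  claimed: ['validated'],
  validated: ['broadcast'],
  broadcast: ['included'],
  included: ['finalized'],
  finalized: [],
};

function canTransition(from: LiftJobState, to: LiftJobState): boolean {
  return allowedTransitions[from].includes(to);
}
```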
Issues
Critical (merge blockers):

1. Breaking API rename without coordination: This renames `sharedMemoryOwnedEntities` -> `workspaceOwnedEntities`, `knownBatchContextGraphs` -> `knownBatchParanets`, the `SharedMemoryHandler` constructor option name, `conditionalShare` -> `shareConditional`, removes the `writeToWorkspace` / `writeConditionalToWorkspace` aliases, and drops several barrel exports. `packages/agent/src/dkg-agent.ts` still uses the old names. This will break the build. Either update the agent package in this PR, or keep backward-compatible aliases.

2. N-Quads double-escaping in `nquads.ts`: The bot flagged this correctly. `q.object` is already serialized N-Triples syntax. Extracting the lexical form and passing it to `n3.literal()` will double-escape backslash sequences (`\"` becomes `\\"`, etc.). This silently corrupts shared-memory data during gossip serialization. Test with objects containing quotes, newlines, or unicode. (See the sketch after this list.)

3. V10 ACK path removed from `publishKnowledgeAssets`: The publish method now always calls `publishKnowledgeAssets` without the V10 ACK provider, but `DKGAgent` still builds and passes `v10ACKProvider` in publish options. Either keep the ACK path in the publisher or remove the provider construction from the agent. A silent no-op is worse than an explicit error.

4. Dropped barrel exports: `ACKCollector`, `StorageACKHandler`, `VerifyCollector`, `VerifyProposalHandler`, and `buildVerificationMetadata` are removed from the barrel. If the agent or CLI imports these, builds will fail. Verify no external consumers.
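To make issue #2 concrete, here is a self-contained illustration of the double-escaping failure mode, independent of the actual nquads.ts code; the literal value is just an example:

```typescript
// An object term that is ALREADY valid N-Triples syntax (escaped once).
const alreadySerialized = '"line1\\nline2 \\"quoted\\""';

// Assumed failure mode: stripping the quotes and re-escaping the lexical form
// escapes every backslash and quote a second time.
const lexical = alreadySerialized.slice(1, -1);
const reEscaped = `"${lexical.replace(/\\/g, '\\\\').replace(/"/g, '\\"')}"`;

console.log(alreadySerialized);               // "line1\nline2 \"quoted\""
console.log(reEscaped);                       // "line1\\nline2 \\\"quoted\\\""  <- corrupted
console.log(reEscaped === alreadySerialized); // false: the round-trip diverges

// Safe handling: a term that is already serialized N-Triples can be emitted
// verbatim, or parsed with an N-Triples parser before being handed to a writer.
```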
Important (should fix):

- `publishFromSharedMemory` removes `clearSharedMemoryAfter`: Previously, publishing from SWM would clear workspace data after a successful publish. This is now gated behind the option, but the option itself seems removed. That means published data lingers in SWM forever, causing duplicate-entity validation failures on subsequent publishes.

- `GraphManager` method renames: `ensureContextGraph` -> `ensureParanet`, `sharedMemoryUri` -> `workspaceGraphUri`, etc. These are internal-only changes in this PR, but they create a terminology divergence: the rest of the codebase uses "context graph" (the V10 terminology), while this PR partially reverts to "paranet" naming. Pick one and stick with it.

- Proto field renames without protobuf compatibility: `PublishRequest.paranetId` -> `contextGraphId` and `KAUpdateRequest.paranetId` -> `contextGraphId`. The protobuf field numbers stay the same, so the wire format is compatible, but any TS code referencing `msg.paranetId` will fail at compile time. The workspace proto rename (`WorkspacePublishRequest` -> `SharePublishRequest`) also changes field names.
Minor:

- New `batching.ts` utility: Clean implementation, good test coverage. The `splitOversizedEntities` option is a nice touch.

- Publisher wallets CLI: Nice addition. The encrypted wallet store with `publisher-wallets.json` is practical for devnet/testnet use.

- The `jobSlug` derivation is a nice touch for debugging — the format `{paranet}/{scope}/{transition}/{opId}/{root-range}` is human-readable without sacrificing the opaque `jobId` for primary keys.
Verdict
The async lift publisher architecture is solid and the docs are excellent. However, this PR has several breaking changes that aren't coordinated with the agent package (#1, #3, #4). The N-Quads double-escaping (#2) is a silent data corruption bug. I'd recommend:
- Fix the nquads serialization
- Either update the agent package in this PR or add backward-compatible aliases
- Coordinate the V10 ACK removal with agent
Then this is good to merge.
const wallet = ethers.Wallet.createRandom();
const env = { ...process.env, DKG_HOME: dkgHome, DKG_API_PORT: SMOKE_API_PORT };

await execFileAsync('node', [CLI_ENTRY, 'publisher', 'wallet', 'add', wallet.privateKey], { env });
🔴 Bug: This test assumes a dkg publisher ... command tree exists, but this PR never wires one into packages/cli/src/cli.ts (there is no program.command('publisher') anywhere) and startPublisherRuntimeIfEnabled() is also not hooked into daemon startup. As written, the smoke test and the new publisher modules are unreachable from the CLI. Please add the command/daemon integration before landing coverage for it.
);
}

const quads = await resolveWorkspaceSelection({
🔴 Bug: This resolves lift payloads from the current workspace graph instead of an immutable snapshot of the enqueued operation. If the same roots are overwritten or finalized before the runner reaches the job, the publisher will either emit the newer data or fail because the old roots were removed. Persist the selected quads (or another immutable snapshot handle) at enqueue time and resolve against that snapshot here.
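A sketch of the snapshot-at-enqueue approach suggested above; the store interface and snapshot-graph naming are assumptions, not the PR's API:

```typescript
// Freeze the selected payload into its own graph when the job is enqueued so
// later workspace writes cannot change what the runner eventually publishes.
interface QuadStore {
  readByRoots(graphUri: string, rootIris: string[]): Promise<string[]>;
  writeGraph(graphUri: string, nquads: string[]): Promise<void>;
  readGraph(graphUri: string): Promise<string[]>;
}

interface LiftJobRecord {
  jobId: string;
  shareOperationId: string;
  snapshotGraphUri: string; // e.g. urn:dkg:publisher:snapshot:<jobId> (assumed scheme)
}

async function enqueueWithSnapshot(
  store: QuadStore,
  workspaceGraphUri: string,
  rootIris: string[],
  job: LiftJobRecord,
): Promise<LiftJobRecord> {
  const quads = await store.readByRoots(workspaceGraphUri, rootIris); // resolve once, at enqueue time
  await store.writeGraph(job.snapshotGraphUri, quads);                // immutable copy
  return job;
}

async function resolveJobPayload(store: QuadStore, job: LiftJobRecord): Promise<string[]> {
  // The runner reads the frozen snapshot, not the live workspace graph.
  return store.readGraph(job.snapshotGraphUri);
}
```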
export { WorkspaceHandler, WorkspaceHandler as SharedMemoryHandler } from './workspace-handler.js';
export { UpdateHandler } from './update-handler.js';
export { ChainEventPoller, type ChainEventPollerConfig, type OnContextGraphCreated, type OnParanetCreated, type OnCollectionUpdated, type OnAllowListUpdated, type OnProfileEvent, type CursorPersistence } from './chain-event-poller.js';
export { ChainEventPoller, type ChainEventPollerConfig, type OnParanetCreated } from './chain-event-poller.js';
🔴 Bug: This also removes several previously public ChainEventPoller types (OnContextGraphCreated, OnCollectionUpdated, OnAllowListUpdated, OnProfileEvent, CursorPersistence) from the package root. That is a source-compatible break for downstream imports from @origintrail-official/dkg-publisher. Keep the old re-exports until you intentionally ship a breaking API change.
);
}

const quads = await resolveWorkspaceSelection({
🔴 Bug: this resolves the payload from the current shared-memory graph by root, not from the specific shareOperationId that was enqueued. If those roots are rewritten before the worker picks the job up, the async publish will lift newer data (or fail because the old op lost its root links) instead of the exact staged operation the user requested. Persist/read a per-operation snapshot, or make resolution query the quads owned by the recorded share operation.
});

failureState = 'broadcast';
const publishResult = await this.publishExecutor({
🔴 Bug: the job is still only validated when the executor runs. If the process crashes after the publish side effects happen but before recordPublishResult() persists anything, startup recovery will reset this job to accepted and it can submit the same publish again. Persist a durable pre-submit/broadcast marker (ideally tied to a deterministic publish operation id) before calling the executor.
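One possible shape for the durable pre-submit marker; the marker-persistence callback and executor signature are assumptions:

```typescript
// Sketch: derive a deterministic publish operation id and persist it before
// invoking the executor, so crash recovery can tell "already submitted" apart
// from "never submitted" instead of blindly resetting the job to accepted.
import { createHash } from 'node:crypto';

function derivePublishOpId(jobId: string, attempt: number): string {
  return createHash('sha256').update(`${jobId}:${attempt}`).digest('hex');
}

async function broadcastWithMarker(
  job: { jobId: string; attempt: number },
  persistBroadcastMarker: (jobId: string, opId: string) => Promise<void>,
  publishExecutor: (opId: string) => Promise<{ status: string }>,
): Promise<{ status: string }> {
  const opId = derivePublishOpId(job.jobId, job.attempt);
  await persistBroadcastMarker(job.jobId, opId); // durable BEFORE any side effect
  return publishExecutor(opId);                  // recovery checks the marker, not guesswork
}
```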
switch (publishResult.status) {
  case 'tentative':
    return {
      status: 'included',
🔴 Bug: mapping a tentative canonical publish to included creates a state the default runner never makes progress on. AsyncLiftRunner only processes accepted jobs, and the CLI runtime in this PR hardcodes hasIncludedRecoveryResolver: false, so one tentative result can leave the wallet locked until manual cleanup and even block the runner from restarting. Either finalize/reset tentative results here, or require and wire included-job recovery before persisting this state.
export { WorkspaceHandler, WorkspaceHandler as SharedMemoryHandler } from './workspace-handler.js';
export { UpdateHandler } from './update-handler.js';
export { ChainEventPoller, type ChainEventPollerConfig, type OnContextGraphCreated, type OnParanetCreated, type OnCollectionUpdated, type OnAllowListUpdated, type OnProfileEvent, type CursorPersistence } from './chain-event-poller.js';
export { ChainEventPoller, type ChainEventPollerConfig, type OnParanetCreated } from './chain-event-poller.js';
🔴 Bug: this narrows the public chain-poller surface to OnParanetCreated only. Existing consumers importing OnContextGraphCreated, OnCollectionUpdated, OnAllowListUpdated, OnProfileEvent, or CursorPersistence from the package root will break even though those types still exist in chain-event-poller.ts. Re-export the previous names here for backward compatibility.
ACK collection gap on the async path. The async lift publisher uses […]. Root cause: architectural gap, not a design flaw. The […]
.action(async (opts: ActionOpts) => {
  try {
    const config = await loadConfig();
    config.publisher = {
🔴 Bug: This writes config.publisher, but nothing in the daemon startup path reads that config or calls startPublisherRuntimeIfEnabled() in this PR. dkg publisher enable will report success while queued jobs never get processed. Wire the runner into daemon start/stop or hold this command until the runtime is actually active.
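A sketch of the missing daemon wiring; startPublisherRuntimeIfEnabled comes from this PR, but the config shape and lifecycle hook shown here are assumptions:

```typescript
// Hypothetical daemon-side wiring so that `dkg publisher enable` has an effect:
// the daemon checks config.publisher on startup and starts/stops the runtime.
interface DaemonConfig {
  publisher?: { enabled?: boolean };
}

declare function startPublisherRuntimeIfEnabled(
  config: DaemonConfig,
): Promise<{ stop(): Promise<void> } | undefined>;

async function onDaemonStart(config: DaemonConfig): Promise<void> {
  const runtime = await startPublisherRuntimeIfEnabled(config); // no-op if disabled
  if (runtime) {
    process.once('SIGTERM', () => {
      void runtime.stop(); // drain the runner before the daemon exits
    });
  }
}
```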
splitOversizedEntities?: boolean;
}

export function batchEntityQuads(
🔴 Bug: This batching helper is never wired into the real CLI publish/share paths in this PR; they still call publishEntityBatches() in cli.ts, which only limits by quad count. Large RDF writes can still exceed the 512 KB shared-memory message limit, so the regression this is trying to fix remains in production until those call sites switch over.
}

const jobId = await inspector.publisher.lift({
  swmId: opts.swmId ?? opts.workspaceId ?? 'swm-main',
🟡 Issue: --swm-id is stored on the job, but nothing in the async lift path ever reads it when resolving or publishing jobs. As written, this CLI option is a no-op and suggests users can target a distinct shared-memory namespace when they currently cannot. Either plumb it through workspace resolution or drop the option until it has real behavior.
| Graph | URI | Stores | Notes |
|---|---|---|---|
| Jobs graph | `urn:dkg:publisher:control-plane` | `LiftJob` and `LiftRequest` resources plus job-native progress metadata | Internal queue and recovery state only. Not workspace/shared state. |
🟡 Issue: The jobs graph URI is configurable via AsyncLiftPublisherConfig.graphUri, so documenting urn:dkg:publisher:control-plane here as a fixed URI will drift from non-default deployments. Mark this as the default URI or mention the override so the diagram matches runtime behavior.
let batch: PublishQuad[] = [];

for (const entityQuads of byEntity.values()) {
  const entityChunks = splitOversizedEntities
🔴 Bug: This helper treats q.subject as the unit of batching and can even split that unit across multiple chunks. share()/sharedMemoryWrite() do replace-by-root writes, so a root plus its /.well-known/genid/... descendants, or one oversized root subject, will be sent as multiple share operations and the later chunk will delete the earlier chunk. Please batch by canonical root entity and never emit multiple shared-memory writes for the same root.
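A sketch of batching by canonical root so a root and its descendants are always emitted in one share operation; resolveRootOf and the size heuristic are hypothetical, not the PR's code:

```typescript
// Group quads by canonical root before sizing batches, so replace-by-root
// shares never split one root (or its /.well-known/genid/... descendants)
// across multiple operations that would overwrite each other.
interface PublishQuad { subject: string; predicate: string; object: string; }

function batchByRoot(
  quads: PublishQuad[],
  maxBytes: number,
  resolveRootOf: (subject: string) => string,
): PublishQuad[][] {
  const byRoot = new Map<string, PublishQuad[]>();
  for (const q of quads) {
    const root = resolveRootOf(q.subject);
    let bucket = byRoot.get(root);
    if (!bucket) {
      bucket = [];
      byRoot.set(root, bucket);
    }
    bucket.push(q);
  }

  const batches: PublishQuad[][] = [];
  let current: PublishQuad[] = [];
  let currentBytes = 0;
  for (const rootQuads of byRoot.values()) {
    const size = JSON.stringify(rootQuads).length; // stand-in for real message sizing
    if (current.length > 0 && currentBytes + size > maxBytes) {
      batches.push(current);
      current = [];
      currentBytes = 0;
    }
    current.push(...rootQuads); // a root is always emitted whole, never split
    currentBytes += size;
  }
  if (current.length > 0) batches.push(current);
  return batches;
}
```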
walletIds: publisherWallets.wallets.map((wallet) => wallet.address),
pollIntervalMs: args.pollIntervalMs,
errorBackoffMs: args.errorBackoffMs,
hasIncludedRecoveryResolver: false,
🔴 Bug: The runtime is created without any chain recovery/finality resolver, but async publish can still leave jobs in broadcast/included states. After a crash, recover() will blindly reset broadcast jobs to accepted (risking duplicate on-chain publishes), and leftover included jobs will make runner.start() fail on restart. This needs a real confirmation/recovery path before the daemon can safely enable async publishing.
export { WorkspaceHandler, WorkspaceHandler as SharedMemoryHandler } from './workspace-handler.js';
export { UpdateHandler } from './update-handler.js';
export { ChainEventPoller, type ChainEventPollerConfig, type OnContextGraphCreated, type OnParanetCreated, type OnCollectionUpdated, type OnAllowListUpdated, type OnProfileEvent, type CursorPersistence } from './chain-event-poller.js';
export { ChainEventPoller, type ChainEventPollerConfig, type OnParanetCreated } from './chain-event-poller.js';
🔴 Bug: This also drops OnContextGraphCreated, OnCollectionUpdated, OnAllowListUpdated, OnProfileEvent, and CursorPersistence from the package root while those types still exist in chain-event-poller.ts. That is a source-compatible break for consumers importing them from @origintrail-official/dkg-publisher; please continue re-exporting them or version this as a breaking change.
async function acquireLock(lockPath: string) {
  for (let attempt = 0; attempt < 40; attempt += 1) {
    try {
      return await open(lockPath, 'wx', 0o600);
🟡 Issue: open(..., 'wx') plus deleting the lock file in finally leaves a permanent stale lock after any crash or forced kill. Once that happens, every later wallet add/remove call will just time out until someone manually deletes .lock. Consider advisory locking or storing PID/timestamp metadata so stale locks can be reaped safely.
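A sketch of a lock file carrying pid/timestamp metadata so stale locks can be reaped; the 30-second staleness window and file format are assumptions, not the PR's behavior:

```typescript
// Record owner pid + timestamp in the lock file; if the owner is dead or the
// lock is too old, reap it instead of timing out forever after a crash.
import { open, readFile, unlink } from 'node:fs/promises';

async function acquireLockWithReaping(lockPath: string, staleMs = 30_000) {
  for (let attempt = 0; attempt < 40; attempt += 1) {
    try {
      const handle = await open(lockPath, 'wx', 0o600);
      await handle.writeFile(JSON.stringify({ pid: process.pid, at: Date.now() }));
      return handle;
    } catch {
      // Lock exists: reap it if its owner looks dead or the metadata is stale.
      try {
        const meta = JSON.parse(await readFile(lockPath, 'utf8')) as { pid: number; at: number };
        const ownerAlive = (() => {
          try { process.kill(meta.pid, 0); return true; } catch { return false; }
        })();
        if (!ownerAlive || Date.now() - meta.at > staleMs) await unlink(lockPath);
      } catch {
        // Unreadable or missing metadata: leave it for the next retry.
      }
      await new Promise((resolve) => setTimeout(resolve, 250));
    }
  }
  throw new Error(`Timed out acquiring ${lockPath}`);
}
```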
Resolved conflicts in daemon.ts:
- Import: kept both validation imports (#108) and contextGraphSharedMemoryUri (#97)
- SWM write response: adopted richer response format from #97 (shareOperationId, contextGraphId, graph URI, triplesWritten) while keeping #108's subGraphName and localOnly validation

Made-with: Cursor
… corruption

The publisher CLI commands (enqueue, jobs, job, stats) were opening separate in-memory store connections to the same persistence file that the daemon was using. Since OxigraphStore is in-memory with file-based flush, concurrent access caused race conditions where the daemon's flush would overwrite CLI-written data (and vice versa), making publisher jobs disappear.

Fix: add publisher API endpoints to the daemon (/api/publisher/*) and update CLI commands to route through the daemon's HTTP API when it is running, falling back to direct store access when the daemon is not available. This ensures a single consistent view of the publisher job queue.

Fixes publisher-cli-smoke.test.ts which was failing on v10-rc since the async publisher (PR #97) was merged.

Made-with: Cursor
…NFT, CSS, cross-contract

Implements the V10 migration spec (dkgv10-spec PR OriginTrail#97) on top of PR OriginTrail#231's `Commit 1/7` CSS state additions. Takes the V10 stack from a partial CSS rewrite to a fully-migrated, self-consistent architecture. CSS is the canonical V10 stake store; `StakingStorage` becomes a frozen V8 archive + TRAC vault; V8 `Staking` and `DelegatorsInfo` are on the chopping block (unregistered at cutover). All stake reads that matter to reward accounting go through `ConvictionStakingStorage`.

Commit 2 — StakingV10 + RandomSampling rewrites
* Drop V8 Staking / DelegatorsInfo wiring (D3/D13/D17). V10-native `_prepareForStakeChangeV10` replaces the V8 cross-call.
* `claim` walks the unclaimed window via D6 retroactive `migrationEpoch` and compounds rewards into `raw` through `cs.increaseRaw` + `cs.addCumulativeRewardsClaimed` (D19 — no separate rewards bucket).
* All V10 stake writes go to CSS (`nodeStakeV10`, `totalStakeV10`, D15). `StakingStorage.nodes[id].stake` is not written by V10.
* `RandomSampling.calculateNodeScore` reads `nodeStakeV10` from CSS (mandatory-migration model: no V8-only nodes post-cutover, so CSS is the canonical source). `submitProof` denominator likewise uses CSS `getNodeEffectiveStakeAtEpoch` and drops the now-redundant `nodeV10BaseStake` subtraction (D4).

Commit 4 — D21 ephemeral NFTs + D23 CSS primitive
* New CSS primitive `createNewPositionFromExisting(oldTokenId, newTokenId, newIdentityId, newLockEpochs, newMultiplier18)` atomically replaces a live position at a fresh tokenId while preserving `cumulativeRewardsClaimed`, `lastClaimedEpoch`, and `migrationEpoch`. Emits `PositionReplaced`.
* `StakingV10.relock(oldTokenId, newTokenId, newLockEpochs)` and `redelegate(oldTokenId, newTokenId, newIdentityId)` rewritten to use the D23 primitive.
* `DKGStakingConvictionNFT.relock` / `redelegate` mint `newTokenId` before the forward, burn `oldTokenId` after. Both return `newTokenId` so callers can track the continuation.

Commit 5 — D7 + D8 + D11 migration primitives
* Split `convertToNFT` into `selfConvertToNFT` and `adminConvertToNFT` (dual-path D7); shared `_convertToNFT` worker.
* D8 — `_convertToNFT` absorbs BOTH V8 `stakeBase` and pending withdrawal amounts into the new V10 position's `raw`. V8 drain subtracts only `stakeBase` from node/total stake (pending was already off-stake at V8 request time).
* NFT wrapper: new `selfMigrateV8`, `adminMigrateV8`, `adminMigrateV8Batch` (D11 batched rescue), and `finalizeMigrationBatch` (DAO closer — sets `v10LaunchEpoch`). Admin gate via new `onlyOwnerOrMultiSigOwner` modifier.
* New `ConvertedFromV8` event shape: `(delegator, tokenId, identityId, stakeBaseAbsorbed, pendingAbsorbed, lockEpochs, isAdmin)`.

Commit 6 — D13 / D18 cross-contract redirects + deploy scripts
* `Profile.sol` — drops `DelegatorsInfo`, reads `isOperatorFeeClaimedForEpoch` from CSS.
* `StakingKPI.sol` — drops `DelegatorsInfo`; `isNodeDelegator` guard dropped (redundant under V8-archive semantics); fee-claim flag + net-node rewards read from CSS.
* `DKGStakingConvictionNFT.sol` — unused `DelegatorsInfo` import/state removed.
* Deploy scripts 021 / 054 / 055 — annotated. 055's dependency list trimmed to match V10 `initialize()`. 054 joined the `v10` tag.
* Hub naming decision: `StakingV10` stays registered as `StakingV10`, NOT aliased to `Staking`. Rationale documented in 055: V10 staking is gated by `onlyConvictionNFT`, so aliasing would make V8-era integrations silently call gated V10 and fail opaquely; keeping slots distinct makes the break loud.

Commit 7 — D14 + NatSpec
* `WITHDRAWAL_DELAY = 0` on both `StakingV10` and the NFT wrapper. Conviction lock expiry IS the delay gate; a second address-timer on top is redundant.
* Top-of-file NatSpec on `DKGStakingConvictionNFT` rewritten to match the final entry-point set and document D21/D23 burn-mint semantics.

Known follow-up:
* Test suite is red — signatures changed on `relock`/`redelegate`, `convertToNFT` renamed, event shapes shifted. Triage is the next task before PR review.
* Live-chain cutover (script 998 — `Hub.removeContractByAddress` for V8 `Staking` + `DelegatorsInfo`) is ops-coordinated and not scripted in this diff.

Refs:
* Spec: OriginTrail/dkgv10-spec#97
* Stacked on: OriginTrail#231

Made-with: Cursor
…bstoned

Fixes the 72 test regressions introduced by the V10 (PR OriginTrail#97) contract stack in 837449e. Leaves the pre-existing 66 TDD/audit-coverage red tests (8d204be "make new tests") untouched — those are intentional red markers for a separate audit pass, not V10 regressions.

== parameters.json ==
- development Chronos.epochLength: 3600 → 2592000 (30 days). `_computeExpiryEpoch` uses wall-clock tier durations (D20 — 30/90/180/360 days); with a 1-hour dev epoch the computed expiryEpoch would land ~730 epochs past mint, diverging from the mainnet model the suite is meant to validate. Dev now mirrors mainnet.

== unit — V10-native, updated in place ==
- ConvictionStakingStorage.test.ts (65 passing):
  * version bump 1.2.0 → 2.0.0
  * createPosition: added migrationEpoch param (D6)
  * D19 — removed rewards-bucket split; increaseRewards / decreaseRewards / rewardsPortion gone, all reward flow compounds into `raw` at claim time
  * D20 — lock=0 expiryEpoch is 0 (rest state), not currentEpoch
- DKGStakingConvictionNFT.test.ts (96 passing):
  * All stake reads redirected SS → CSS (D15)
  * D21 ephemeral semantics: relock + redelegate now burn+mint, tests capture `newTokenId` from staticCall and assert old-token burn
  * D14 — WITHDRAWAL_DELAY=0: no time advance between create/finalize
  * D19 — createWithdrawal pre-expiry reverts `LockStillActive` (no sidecar to drain); withdrawal tests refactored for single raw bucket
  * V8 migration: renamed to selfMigrateV8 / selfConvertToNFT / adminConvertToNFT; D3 — removed DelegatorsInfo preconditions; event signatures updated (ConvertedFromV8 adds isAdmin + pending)
  * claim/transfer tests: `pos.raw` compounds, `cumulativeRewardsClaimed` tracks lifetime total (D19)
- RandomSampling.test.ts (33 passing):
  * stakingStorage() → convictionStakingStorage() (D15); fixture no longer depends on StakingStorage directly
- EpochStorage.test.ts (26 passing):
  * Fixture now deploys Chronos + reads epochLength() at runtime; all time.increase(3600) replaced with dynamic `epochSeconds` to track the 30-day dev epoch

== integration — V10-native, updated in place ==
- v10-conviction.test.ts (5 passing):
  * WITHDRAWAL_DELAY_SECONDS → 0 (D14)
  * createConviction / withdrawal / claim assertions read V10 aggregates from CSS (D15)
  * claim test verifies `pos.raw` compounding (D19)
  * selfMigrateV8 replaces convertToNFT; asserts CSS totalStakeV10 / nodeStakeV10 grow on migration (D15)
- v10-reward-flywheel.test.ts: `describe.skip` with tombstone. All 3 tests mix V8 stake + V10 stake on the same node. Post-PR97 user directive is V10-only: `calculateNodeScore` reads V10 stake, V8 stake on a V10 node earns 0, and SS no longer mirrors V10 aggregates (D15). Mixed-mode scoring is no longer a valid scenario — migration is mandatory.

== integration — V8 flows tombstoned with rationale ==
Skipped with `describe.skip` / `it.skip` and explicit tombstone comments pointing at the V10 decision (D3/D15/D18 + user directive "there will only be V10 nodes") that invalidates each scenario, plus the V10 equivalent covering the same ground:
- RandomSampling.test.ts — entire file, scoring covered by unit suite + v10-conviction end-to-end
- Staking.test.ts — Full complex scenario + Delegator Scoring (Suites 1-5), V8 rolling-rewards assertions collapse against V10-only scoring
- StakingRewards.test.ts — rewards / Claim order / Proportional / Withdrawal request / Operator fee withdrawal / Migration tests, all driving V8 Staking.stake → claimDelegatorRewards
- Profile.test.ts — 7 operator-fee lifecycle / validation tests that build on buildInitialRewardsState (V8 rewards fixture); access-control + When-in-Epoch-1 tests stay active

== result ==
1027 passing · 108 pending (tombstones) · 66 failing

All 66 remaining failures predate the V10 contract work (8d204be "make new tests" — audit-TDD red scaffolding for E-#, SPEC-GAP, INTENTIONAL RED, Migrator integration suites), not introduced or touched by this stack.

Made-with: Cursor
Summary

Changes

Test Plan
- `pnpm test`
- `pnpm build`
- `dkg start` (if applicable)

Related Issues