feat: agent identity, access control, CLI invite flow, SSE notifications #195

branarakic merged 9 commits into v10-rc from
Conversation
…, key restore

- Document custodial agent key restoration on restart (§4.3)
- Document join-approval broadcast to all peers (§4.4)
- Document SSE events for join/sync notifications (§4.5)
- Add new API routes to the routes table (register, history, events)

Made-with: Cursor
…venance

Lifecycle provenance landed. Event-sourced PROV-O model improvements and remaining fixes are included in PR #195.
Adds a dkg:memoryLayer triple to _meta at assertion create/promote, and updates the PROTOCOL_SYNC handler to exclude WM-only assertions from peer sync. Will be superseded by the event-sourced lifecycle model.

Made-with: Cursor
…on & VM publish tests

The assertion facade used peerId instead of the wallet address when constructing graph URIs, causing promote to return 0 triples for any assertion created via import-file. Fixed by resolving the default agent address from the chain wallet.

Also adds:
- SSE real-time event stream (/api/events) replacing 30s polling for notifications and project list updates in the UI
- JOIN_APPROVED + PROJECT_SYNCED events in the event bus, with daemon listeners that broadcast to SSE clients and insert notifications
- Clickable join-request/join-approved notifications that navigate to the project page
- Private key restoration on restart for custodial agents loaded from the triple store (fixes sign-join failures after non-clean restarts)
- Broadcast join-approval to all peers (not just registry matches)
- wmSparql fix: remove default-graph arm that leaked system triples into WM entity counts for non-participants
- Comprehensive devnet-test-sharing.sh (16 sections, ~100 checks) covering WM isolation, SWM promotion, late joiner sync, VM publish, the promote address fix, and clearAfter semantics

Made-with: Cursor
…le, re-create guard

- Add type/format validation to /api/context-graph/register route (isValidContextGraphId, type checks for revealOnChain and accessPolicy)
- Validate agentAddress query param in /api/assertion/:name/history to prevent SPARQL injection via crafted URI components
- Thread subGraphName through assertionLifecycleUri and all lifecycle metadata generators so assertions in different sub-graphs get distinct lifecycle records instead of colliding on the same _meta subject
- Guard assertionCreate against re-create: clear stale lifecycle entity and its prov:Activity events before inserting fresh 'created' triples, preventing nondeterministic history after create/discard/create cycles
- Accept subGraphName query param in the history endpoint

Made-with: Cursor
Force-pushed from cd09206 to 355c595
```ts
// EventSource can't set headers — accept token as query param for SSE endpoints
const url = new URL(req.url ?? '/', `http://${req.headers.host}`);
const qsToken = url.searchParams.get('token');
if (qsToken && verifyToken(qsToken, validTokens)) return true;
```
🔴 Bug: httpAuthGuard() now accepts ?token= on every authenticated route, not just /api/events. That puts bearer credentials into URLs, browser history, logs, and referrers across the whole API surface. Restrict query-string auth to the SSE endpoint (or use a short-lived SSE-specific token) and keep the rest header-only.
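A minimal sketch of the suggested restriction, assuming the guard can see the request path (the SSE path constant and the helper names here are illustrative, not the PR's actual code):

```typescript
// Honor ?token= only on the SSE endpoint; every other route stays header-only.
const SSE_PATH = '/api/events'; // assumed path from the PR description

// Placeholder for the real token check (the actual verifyToken lives in the PR).
function verifyToken(token: string, validTokens: Set<string>): boolean {
  return validTokens.has(token);
}

export function allowQueryStringAuth(
  reqUrl: string,
  host: string,
  validTokens: Set<string>,
): boolean {
  const url = new URL(reqUrl, `http://${host}`);
  // Never accept bearer tokens via URL on non-SSE routes.
  if (url.pathname !== SSE_PATH) return false;
  const qsToken = url.searchParams.get('token');
  return qsToken !== null && verifyToken(qsToken, validTokens);
}
```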
```ts
  await this.autoRegisterDefaultAgent();
}
if (!this.defaultAgentAddress && this.localAgents.size > 0) {
  this.defaultAgentAddress = this.localAgents.values().next().value!.agentAddress;
```
🟡 Issue: this picks the "default" agent from the first SPARQL binding returned at startup, but result ordering is undefined. On nodes with multiple registered agents, the node-level token and approval routing can start pointing at a different agent after restart. Persist an explicit default-agent marker and restore from that instead of relying on iteration order.
```ts
if (payload.type === 'join-approved') {
  const { contextGraphId, agentAddress: approvedAddr } = payload;
  if (contextGraphId) {
    const localAddr = await this.getDefaultAgentAddress();
```
🔴 Bug: join approvals are matched only against getDefaultAgentAddress(). If this node hosts more than one registered agent, approvals for any non-default agent will be dropped and that agent never auto-subscribes. Match against the full local agent registry (for example localAgents.has(approvedAddr)) instead of only the default address.
```ts
  return new TextEncoder().encode(JSON.stringify({ ok: false, error: 'missing fields' }));
}
// Only store if this node owns the CG
const exists = await this.contextGraphExists(contextGraphId);
```
🔴 Bug: contextGraphExists() is not a curator check. Any peer that has already synced the project will return true here, store the join request locally, and ACK success, so the actual curator never receives the request. Check ownership/curator identity before persisting, and have non-curators reject/forward instead.
```ts
{ subject: agentUri, predicate: SCHEMA_NAME, object: `"${escapeSparqlLiteral(record.name)}"`, graph },
{ subject: agentUri, predicate: `${DKG}agentAddress`, object: `"${record.agentAddress}"`, graph },
{ subject: agentUri, predicate: `${DKG}agentMode`, object: `"${record.mode}"`, graph },
{ subject: agentUri, predicate: `${DKG}agentAuthToken`, object: `"${record.authToken}"`, graph },
```
🔴 Bug: this persists bearer tokens into a queryable RDF graph. Because /api/query can read arbitrary graphs, any authenticated caller can fetch did:dkg:system/agents and steal other agents' auth tokens. Keep the token index in a non-queryable secret store, or store only a one-way hash here.
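The one-way-hash variant could be sketched with Node's built-in crypto as follows (the predicate name is hypothetical; only the digest would reach the queryable graph):

```typescript
import { createHash } from 'node:crypto';

// Derive a SHA-256 hex digest of the bearer token; the raw token stays out of RDF.
export function tokenHash(rawToken: string): string {
  return createHash('sha256').update(rawToken, 'utf8').digest('hex');
}

// Illustrative triple builder: persists only the hash, under a hypothetical
// predicate (not a predicate that exists in this PR).
export function persistedTokenTriple(agentUri: string, rawToken: string, graph: string) {
  return {
    subject: agentUri,
    predicate: 'dkg:agentAuthTokenHash',
    object: `"${tokenHash(rawToken)}"`,
    graph,
  };
}
```

Lookup at auth time then hashes the presented token and compares digests, so a SPARQL read of the agents graph reveals nothing usable.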
```ts
// Restore private key for custodial agents from operational wallet keys
// (private keys are intentionally not persisted to the triple store)
if (record.mode === 'custodial' && !record.privateKey) {
  const opKeys = this.config.chainConfig?.operationalKeys;
```
🔴 Bug: non-owner custodial agents lose signing capability after restart. registerAgent() creates random private keys, but loadAgentsFromStore() only reconstructs keys that happen to match chainConfig.operationalKeys, so every other custodial agent comes back without privateKey and signJoinRequest() will fail. Persist encrypted custodial keys (or a recoverable keystore reference) instead of inferring them from node wallets.
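A sketch of the encrypted-keystore alternative using Node's built-in AES-256-GCM (the key-derivation parameters and blob format here are assumptions, not the PR's design):

```typescript
import { createCipheriv, createDecipheriv, randomBytes, scryptSync } from 'node:crypto';

// Encrypt a custodial private key at rest so loadAgentsFromStore() can restore
// it after a restart instead of inferring it from chainConfig.operationalKeys.
export function encryptKey(privateKey: string, passphrase: string): string {
  const salt = randomBytes(16);
  const key = scryptSync(passphrase, salt, 32); // derive a 256-bit key
  const iv = randomBytes(12);
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const ct = Buffer.concat([cipher.update(privateKey, 'utf8'), cipher.final()]);
  // salt.iv.tag.ciphertext, all hex (illustrative blob format)
  return [salt, iv, cipher.getAuthTag(), ct].map((b) => b.toString('hex')).join('.');
}

export function decryptKey(blob: string, passphrase: string): string {
  const [salt, iv, tag, ct] = blob.split('.').map((h) => Buffer.from(h, 'hex'));
  const key = scryptSync(passphrase, salt, 32);
  const decipher = createDecipheriv('aes-256-gcm', key, iv);
  decipher.setAuthTag(tag); // GCM tag authenticates the ciphertext
  return Buffer.concat([decipher.update(ct), decipher.final()]).toString('utf8');
}
```

The passphrase would come from node config or an operator-supplied secret; the blob can live in a mode-0600 keystore file rather than the triple store.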
```ts
});
}
// Auto-include creator's agent address
if (this.defaultAgentAddress) {
```
🔴 Bug: curated project creation auto-adds defaultAgentAddress, not the agent that actually made the request. On a multi-agent node, creating a project with a non-default agent token can invite the wrong address and leave the real creator off the allowlist. Thread the caller's agentAddress into createContextGraph() and use it for creator/curator/allowlist triples.
```ts
  return jsonResponse(res, 400, { error: 'agentAddress is required' });
}
try {
  await agent.inviteAgentToContextGraph(contextGraphId, agentAddress);
```
🔴 Bug: this mutating route ignores the caller identity that was already resolved in requestAgentAddress. inviteAgentToContextGraph() authorizes against defaultAgentAddress, so any local agent token can add/remove/approve participants for a project owned by the node's default agent. Pass the caller's agent address through and compare ownership against that actor, not the node default.
…aller identity threading

- Restrict query-param token auth to /api/events endpoint only (prevents credential leakage in URLs/logs across the full API surface)
- Match join-approved notifications against full localAgents registry, not just defaultAgentAddress (multi-agent node support)
- Replace contextGraphExists() with getContextGraphOwner() curator check in join request handler (non-curators now reject instead of ACK)
- Thread callerAgentAddress through createContextGraph, invite, remove, and approve routes so ownership checks use the actual caller identity rather than the node's default agent

Made-with: Cursor
```ts
});
await this.store.insert(lifecycleQuads);

const metaGraph = contextGraphMetaUri(contextGraphId);
```
🔴 Bug: metaGraph is already declared at line 1873 in the same scope, so this second declaration is a syntax error and the package will not compile. Reuse the existing variable or rename one of them.
```ts
this.localAgents.set(record.agentAddress, record);
this.agentTokenIndex.set(record.authToken, record.agentAddress);
await this.persistAgentToStore(record);
```
🔴 Bug: generated custodial agents are only persisted via RDF here, but loadAgentsFromStore() can recover a private key only from chainConfig.operationalKeys. Any custodial agent created with generateCustodialAgent() loses its signing key after restart, so join signing and other agent-signed flows break permanently. Persist the key in an encrypted keystore, or block custodial registration until recovery exists.
```ts
  nodeName: this.config.name,
});
this.log.info(ctx, `On-chain profile created, identityId=${identityId}`);
} else if (identityId === 0n) {
```
🔴 Bug: this lets edge nodes run without an on-chain identity, but private sync auth later still hard-requires requesterIdentityId > 0. An invited edge node will subscribe and then get every curated/private sync request rejected. Either sign sync requests with the agent key or keep a fallback auth path for identity-less edge nodes.
```ts
return this.subscribedContextGraphs.has(contextGraphId)
  || (this.config.syncContextGraphs ?? []).includes(contextGraphId);
// Check if any local agent address is in the participants list
const myAgentAddress = this.defaultAgentAddress;
```
🔴 Bug: read authorization is now evaluated against this.defaultAgentAddress, not the caller resolved from the bearer token. On a multi-agent node, any agent token inherits the default owner's read permissions for curated graphs. Thread the request agent address into canReadContextGraph()/getDisallowedGraphPrefixes() instead of using node-global state.
```ts
  throw new Error(`Context graph "${opts.id}" already exists`);
}

const isCurated = opts.accessPolicy === 1 || (opts.allowedAgents && opts.allowedAgents.length > 0);
```
🔴 Bug: allowedPeers no longer makes a graph curated here. With the new private-read path only checking allowedAgents, a legacy peer-allowlisted project will still be created/broadcast as public and its sync gate will stop enforcing the old allowlist. Include allowedPeers in the curated/private path until migration is complete.
```ts
{ subject: paranetUri, predicate: DKG_ONTOLOGY.DKG_ACCESS_POLICY, object: `"${opts.accessPolicy === 1 || opts.private ? 'private' : 'public'}"`, graph: ontologyGraph },
{ subject: paranetUri, predicate: DKG_ONTOLOGY.RDF_TYPE, object: DKG_ONTOLOGY.DKG_PARANET, graph: defGraph },
{ subject: paranetUri, predicate: DKG_ONTOLOGY.SCHEMA_NAME, object: `"${opts.name}"`, graph: defGraph },
{ subject: paranetUri, predicate: DKG_ONTOLOGY.DKG_CREATOR, object: `did:dkg:agent:${this.peerId}`, graph: defGraph },
```
🔴 Bug: the creator is still recorded as did:dkg:agent:${this.peerId} instead of the bearer-token agent (callerAgentAddress). That breaks per-agent ownership and attributes provenance to the transport peer ID instead of the agent. Write the caller agent DID here (and in the provenance activity) so agent-scoped ACLs can work.
```ts
    `Wait for sync to complete or create it locally first.`,
  );
}
const callerIsOwner = (callerDid && owner === callerDid) || owner === selfDid || owner === selfAgentDid;
```
🔴 Bug: this authorizes against node-wide identities (selfDid/selfAgentDid), so any local agent on the same node can manage participants for a project owned by the default agent. Compare only against the resolved caller DID; the same issue is duplicated in removeAgentFromContextGraph() below.
```ts
/**
 * Reject a pending join request.
 */
async rejectJoinRequest(contextGraphId: string, agentAddress: string): Promise<void> {
```
🔴 Bug: rejectJoinRequest() has no curator/owner authorization at all. Any authenticated caller on a node that has the pending request in its store can flip it to rejected. Apply the same owner validation as approve/invite and pass the caller agent through from the API route.
```ts
}
agent.verifyJoinRequest(contextGraphId, agentAddress, timestamp, signature);

const isCurator = await agent.contextGraphExists(contextGraphId);
```
🔴 Bug: contextGraphExists() only tells you the project is present locally, not that this node is the curator. On any non-curator node that has already synced/subscribed the graph, this branch stores the join request locally and returns success instead of forwarding it to the curator. Check ownership/curator status here before deciding to keep the request local.
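The suggested gating could look like this minimal sketch (routeJoinRequest, isCuratorOf, storeLocally, and forwardToCurator are assumed callbacks, not existing APIs in this PR):

```typescript
// Keep a join request only when this node actually curates the graph;
// otherwise forward it instead of ACKing success and swallowing it.
interface JoinRequest { contextGraphId: string; agentAddress: string; }

export async function routeJoinRequest(
  req: JoinRequest,
  isCuratorOf: (cgId: string) => Promise<boolean>,
  storeLocally: (r: JoinRequest) => Promise<void>,
  forwardToCurator: (r: JoinRequest) => Promise<void>,
): Promise<'stored' | 'forwarded'> {
  if (await isCuratorOf(req.contextGraphId)) {
    await storeLocally(req); // curator node: keep the pending request
    return 'stored';
  }
  await forwardToCurator(req); // synced copy only: pass it along
  return 'forwarded';
}
```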
- Add 7 new CLI commands under `context-graph`: add-agent, remove-agent, agents, request-join, approve-join, reject-join, join-requests
- Auto-prefix bare slugs with agent address in `context-graph create` (e.g. "my-research" -> "0xAbC.../my-research")
- Add --access-policy and --allowed-agent options to create command
- Add ApiClient methods for all invite-flow endpoints
- Deprecate old `context-graph invite --peer` in favor of `add-agent`
- Remove random timestamp suffix from UI CreateProjectModal CG IDs

Made-with: Cursor
```ts
});
await this.store.insert(lifecycleQuads);

const metaGraph = contextGraphMetaUri(contextGraphId);
```
🔴 Bug: metaGraph is already declared a few lines above in this same scope. Redeclaring it here makes the file fail to parse/build. Reuse the existing variable instead of introducing a second const.
```ts
authToken: strip(row['token']),
createdAt: strip(row['createdAt']) || '',
};
// Restore private key for custodial agents from operational wallet keys
```
🔴 Bug: custodial agents generated by registerAgent() never have their private key persisted, and on restart this restoration path only checks operationalKeys. That means any non-owner custodial agent comes back without signing ability while its auth token still works. Persist the generated key securely (or refuse to reload the agent) so restart doesn't permanently break join/sign flows.
```ts
{ subject: paranetUri, predicate: DKG_ONTOLOGY.DKG_ACCESS_POLICY, object: `"${opts.accessPolicy === 1 || opts.private ? 'private' : 'public'}"`, graph: ontologyGraph },
{ subject: paranetUri, predicate: DKG_ONTOLOGY.RDF_TYPE, object: DKG_ONTOLOGY.DKG_PARANET, graph: defGraph },
{ subject: paranetUri, predicate: DKG_ONTOLOGY.SCHEMA_NAME, object: `"${opts.name}"`, graph: defGraph },
{ subject: paranetUri, predicate: DKG_ONTOLOGY.DKG_CREATOR, object: `did:dkg:agent:${this.peerId}`, graph: defGraph },
```
🔴 Bug: this still writes dkg:creator as the node peer ID even when callerAgentAddress is available. Later ownership checks read dkg:creator, so any local agent on the same node can manage a graph created by another agent, and remote nodes never see the promised agent-level owner. Write the creator triple with the caller/default agent address here, or switch the auth checks to dkg:curator.
```ts
.action(async (contextGraphId: string) => {
  try {
    const client = await ApiClient.connect();
    const result = await client.signJoinRequest(contextGraphId);
```
🔴 Bug: signJoinRequest() only signs locally; it does not deliver the request to the curator. This command prints "Join request sent" without ever calling the /request-join endpoint, so CLI users can never actually submit a join request. Add the follow-up API call that posts the signed payload.
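A sketch of the missing second step (submitJoinRequest is a hypothetical client method; only signJoinRequest appears in the diff):

```typescript
// Sign locally, then actually deliver the signed payload to the curator.
interface SignedJoin { contextGraphId: string; timestamp: number; signature: string; }

interface JoinClient {
  signJoinRequest(cgId: string): Promise<SignedJoin>;
  submitJoinRequest(cgId: string, payload: SignedJoin): Promise<void>; // hypothetical
}

export async function requestJoin(client: JoinClient, cgId: string): Promise<SignedJoin> {
  const signed = await client.signJoinRequest(cgId); // step 1: local signature
  await client.submitJoinRequest(cgId, signed);      // step 2: deliver to the curator
  return signed;
}
```

Wrapping both calls in one client method also keeps the CLI and UI flows from diverging.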
```ts
}
agent.verifyJoinRequest(contextGraphId, agentAddress, timestamp, signature);

const isCurator = await agent.contextGraphExists(contextGraphId);
```
🔴 Bug: contextGraphExists() only tells us this node has a local copy of the graph, not that it is the curator. Any non-curator member with synced metadata will take this branch, store the request in its own _meta, and never forward it to the actual curator. Gate the local path on an ownership/curator check instead of existence.
```ts
// likely denied access (curated CG, not on allowlist). Remove the
// subscription so the project doesn't appear in the UI as a ghost.
if (result.dataSynced === 0 && result.syncCapablePeers > 0) {
  const exists = await agent.contextGraphExists(paranetId);
```
🔴 Bug: this treats "0 data synced + no local definition" as an authorization failure, but catch-up sync never pulls the ONTOLOGY definition for empty public projects. Joining an empty open project from an invite code will therefore be marked denied even though access succeeded. Base denial on an explicit remote auth failure (or synced private metadata), not on the absence of data.
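A minimal sketch of basing the decision on an explicit flag rather than the zero-data heuristic (the authDenied field is an assumption about what the sync result could carry, not an existing field):

```typescript
// Decide ghost-subscription removal from an explicit auth failure, not from
// "zero data synced", so empty public projects are not misclassified as denied.
interface SyncResult {
  dataSynced: number;
  syncCapablePeers: number;
  authDenied?: boolean; // assumed: set only on an explicit remote auth rejection
}

export function shouldDropSubscription(result: SyncResult): boolean {
  return result.authDenied === true;
}
```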
```ts
const cgId = `cg:${slugify(trimmedName)}-${Date.now().toString(36)}`;
const finalSlug = slugify(trimmedName);
const resolvedAddr = agentAddress ?? '0x0000000000000000000000000000000000000000';
```
🔴 Bug: if fetchCurrentAgent() has not completed yet, this falls back to the zero address and creates a permanent context-graph ID under 0x000…. Because the backend only auto-prefixes bare slugs, that bad ID will not be corrected later. Disable creation until agentAddress is loaded, or send only the slug and let the server namespace it.
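The "block until loaded" option could be sketched like this (the helper name and ID format are illustrative, not the UI's actual scheme):

```typescript
// Refuse to mint a context-graph ID until a real agent address is available,
// instead of silently namespacing the project under the zero address.
export function buildContextGraphId(slug: string, agentAddress: string | null): string | null {
  if (!agentAddress || !/^0x[0-9a-fA-F]{40}$/.test(agentAddress)) {
    return null; // caller keeps the Create button disabled until identity loads
  }
  return `${agentAddress}/${slug}`; // illustrative format
}
```

Alternatively, the client could send only the bare slug and let the server namespace it, which removes the race entirely.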
…persistence

- Store SHA-256 hash of agent auth tokens in RDF instead of plaintext so SPARQL queries never reveal bearer credentials. Raw tokens are kept in a permission-restricted keystore file for boot-time recovery. Existing plaintext tokens are auto-migrated on first load.
- HTTP /request-join handler now uses isCuratorOf() instead of contextGraphExists(), preventing non-curator nodes from silently swallowing join requests intended for the actual project owner.
- Ownership checks in inviteAgentToContextGraph / removeAgentFromContextGraph no longer fall back to node-level identity when an explicit caller agent address is provided. This prevents non-owner agents on a multi-agent node from piggybacking on the default agent's ownership.
- Custodial agent private keys are now persisted to an encrypted-at-rest keystore file (agent-keystore.json, mode 0600) so non-operational-wallet custodial agents survive node restarts.
- Default agent selection is now deterministic: an explicit isDefaultAgent marker is persisted in the triple store and restored on boot, removing reliance on undefined SPARQL result ordering.
- Fix pre-existing duplicate metaGraph declaration in dkg-publisher.ts.

Made-with: Cursor
```ts
  this.log.warn(ctx, `Cannot resolve identityId for approver ${a.approverAddress} — skipping`);
  continue;
}
resolvedSignatures.push({ identityId: id, r: a.signatureR, vs: a.signatureVS });
```
🔴 Bug: Unresolved approvals are now still appended with identityId = 0n. The length check below no longer filters them out, and chain.verify() forwards those IDs directly on-chain. Keep skipping unresolved signers or fail fast before constructing resolvedSignatures.
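The keep-skipping behavior can be sketched as follows (field names follow the snippet; resolveId stands in for the real identity lookup):

```typescript
// Skip approvals whose identityId cannot be resolved, instead of forwarding
// identityId = 0n to chain.verify() and reverting on-chain.
interface Approval { approverAddress: string; signatureR: string; signatureVS: string; }
interface ResolvedSig { identityId: bigint; r: string; vs: string; }

export function resolveSignatures(
  approvals: Approval[],
  resolveId: (addr: string) => bigint, // 0n means the approver is unknown
): ResolvedSig[] {
  const out: ResolvedSig[] = [];
  for (const a of approvals) {
    const id = resolveId(a.approverAddress);
    if (id === 0n) continue; // unresolved: never forward a zero identity
    out.push({ identityId: id, r: a.signatureR, vs: a.signatureVS });
  }
  return out;
}
```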
```ts
.action(async (contextGraphId: string) => {
  try {
    const client = await ApiClient.connect();
    const result = await client.signJoinRequest(contextGraphId);
```
🔴 Bug: This only signs the join request locally. The CLI never POSTs the signed payload to /request-join, so it prints success even though no curator receives anything. Add the submit step here (like the UI does) or wrap both calls in a dedicated client method.
```ts
{ subject: paranetUri, predicate: DKG_ONTOLOGY.DKG_ACCESS_POLICY, object: `"${opts.accessPolicy === 1 || opts.private ? 'private' : 'public'}"`, graph: ontologyGraph },
{ subject: paranetUri, predicate: DKG_ONTOLOGY.RDF_TYPE, object: DKG_ONTOLOGY.DKG_PARANET, graph: defGraph },
{ subject: paranetUri, predicate: DKG_ONTOLOGY.SCHEMA_NAME, object: `"${opts.name}"`, graph: defGraph },
{ subject: paranetUri, predicate: DKG_ONTOLOGY.DKG_CREATOR, object: `did:dkg:agent:${this.peerId}`, graph: defGraph },
```
🔴 Bug: New graphs still persist dkg:creator as did:dkg:agent:{peerId} even when the request was authenticated as another local agent. Later owner checks compare against callerAgentAddress, so non-default agents on the same node cannot manage the projects they created. Store the caller agent DID here instead of the transport peer ID.
```ts
  throw new Error(`Context graph "${opts.id}" already exists`);
}

const isCurated = opts.accessPolicy === 1 || (opts.allowedAgents && opts.allowedAgents.length > 0);
```
🔴 Bug: allowedPeers still has a supported CLI/API path, but isCurated no longer considers it. A graph created with --invite/allowedPeers will be stored as public and advertised through ONTOLOGY, so the legacy allowlist only gates gossip writes, not discovery/sync. Include allowedPeers?.length in the curated check until that path is removed.
```ts
// the sync protocol enforces access on the remote side regardless.
const localAllowed = await agent.getContextGraphAllowedAgents(paranetId).catch(() => [] as string[]);
if (localAllowed.length > 0) {
  const myAddr = agent.getDefaultAgentAddress();
```
🔴 Bug: This early allowlist check uses the node's default agent instead of the agent resolved from the request token. On multi-agent nodes, a valid non-default invitee will get a false 403 whenever the default agent is not on the allowlist. Compare against requestAgentAddress here.
```ts
const cgId = `cg:${slugify(trimmedName)}-${Date.now().toString(36)}`;
const finalSlug = slugify(trimmedName);
const resolvedAddr = agentAddress ?? '0x0000000000000000000000000000000000000000';
```
🔴 Bug: If the agent identity request has not completed yet, this permanently namespaces the project under the zero address. Different nodes can collide on the same context-graph ID, and curator/participant checks no longer match the real agent. Block creation until agentAddress is loaded, or let the server expand a bare slug.
… forward via P2P
- Auto-include creator in allowlist when accessPolicy=1, even without
explicit allowedAgents (previously only added when allowedAgents was
non-empty, leaving curated CGs with empty allowlists)
- Wire /api/context-graph/{id}/sign-join to call forwardJoinRequest()
after signing (previously only signed without sending via P2P)
- Add devnet-test-cli-invite.sh: 21-check end-to-end test covering CLI
and API invite flows, cross-checks, and edge cases
Made-with: Cursor
```ts
{ subject: paranetUri, predicate: DKG_ONTOLOGY.DKG_ACCESS_POLICY, object: `"${opts.accessPolicy === 1 || opts.private ? 'private' : 'public'}"`, graph: ontologyGraph },
{ subject: paranetUri, predicate: DKG_ONTOLOGY.RDF_TYPE, object: DKG_ONTOLOGY.DKG_PARANET, graph: defGraph },
{ subject: paranetUri, predicate: DKG_ONTOLOGY.SCHEMA_NAME, object: `"${opts.name}"`, graph: defGraph },
{ subject: paranetUri, predicate: DKG_ONTOLOGY.DKG_CREATOR, object: `did:dkg:agent:${this.peerId}`, graph: defGraph },
```
🔴 Bug: dkg:creator is still persisted as the node peer ID even when the request was authenticated as a specific agent. The new ownership checks (getContextGraphOwner / assertCallerIsOwner / registerContextGraph) read dkg:creator, not dkg:curator, so a context graph created by a non-default agent becomes unmanageable by its actual creator. Persist opts.callerAgentAddress here (and in the provenance association below) so ownership stays agent-scoped.
```diff
 get assertion() {
   const agent = this;
-  const agentAddress = this.peerId;
+  const agentAddress = this.defaultAgentAddress ?? this.peerId;
```
🔴 Bug: This accessor still closes over defaultAgentAddress, so /api/assertion/* requests authenticated as any other registered agent will create/promote/discard/history under the wrong DID. The caller's resolved agent address needs to be threaded into these operations instead of using a single process-wide default.
```ts
try {
  const { agentAddress } = JSON.parse(body);
  if (!agentAddress) return jsonResponse(res, 400, { error: 'Missing agentAddress' });
  await agent.rejectJoinRequest(contextGraphId, agentAddress);
```
🔴 Bug: approveJoinRequest() enforces creator ownership, but rejectJoinRequest() is called here with no caller context and the callee does no authorization. Any authenticated token on the node can reject another context graph's pending requests. Pass requestAgentAddress through and apply the same owner check before mutating state.
```ts
if (rawAgentAddress && !/^[\w:.\-]+$/.test(rawAgentAddress)) {
  return jsonResponse(res, 400, { error: "Invalid agentAddress format" });
}
const subGraphName = qs.get("subGraphName") ?? undefined;
```
🔴 Bug: subGraphName is forwarded without validateOptionalSubGraphName(), unlike the other assertion routes. agent.assertion.history() interpolates it into a SPARQL IRI, so a crafted query param can break the query or inject invalid syntax. Validate it here before building the history request.
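A hedged sketch of the missing validation, mirroring the agentAddress pattern check above (the real validateOptionalSubGraphName rules may differ):

```typescript
// Reject any subGraphName that could break or inject into the SPARQL IRI
// the history query interpolates it into. Pattern mirrors the agentAddress
// check; the actual allowed character set is an assumption.
export function validateOptionalSubGraphName(value: string | undefined): string | undefined {
  if (value === undefined) return undefined;
  if (!/^[\w.\-]+$/.test(value)) {
    throw new Error('Invalid subGraphName format');
  }
  return value;
}
```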
```ts
  kcUal?: string;
}

export interface AssertionDescriptor {
```
🔴 Bug: This replaces the exported AssertionDescriptor shape instead of extending it. Consumers still expecting createdAt / promotedAt / publishedAt (including packages/core/test/memory-model.test.ts) will stop type-checking, and /api/assertion/*/history also changes shape without a compatibility path. Keep the old fields or introduce a new versioned descriptor.
```ts
const row = entityResult.bindings[0];
const stateStr = strip(row['state']) as AssertionState;
const layerStr = strip(row['memoryLayer']);
const graphUri = row['assertionGraph'] ?? contextGraphAssertionUri(contextGraphId, addr, name);
```
🟡 Issue: The new subGraphName support falls back to contextGraphAssertionUri(contextGraphId, addr, name) when dkg:assertionGraph is missing, so history for legacy sub-graph assertions reports the root assertion graph instead of the sub-graph. Include opts?.subGraphName in this fallback URI.
- agent.test.ts: add missing getPrivateContextGraphParticipants mock for authorizeSyncRequest test; subscribe CG for buildSyncRequest pipe-delimited test so hasLocalData is true
- e2e-sub-graphs.test.ts: accept both 'confirmed' and 'tentative' publish status (edge nodes without on-chain registration correctly return tentative)
- e2e-privacy.test.ts: fix type mismatch (bigint vs string) in participant assertions; move identity setup before agent creation for late-join test
- dkg-agent.ts canReadContextGraph: restructure to check identity ID against participants, fall back to subscription for edge nodes with no on-chain identity (0n), deny access when node has identity but is not in participants
- dkg-agent.ts sync handler: fix data-phase SPARQL filter to include the base data graph URI (not just sub-graphs with trailing /)

Made-with: Cursor
```ts
  ? registerSelfSovereignAgent(name, opts.publicKey, opts.framework)
  : generateCustodialAgent(name, opts?.framework);

this.localAgents.set(record.agentAddress, record);
```
🔴 Bug: Registration only rejects duplicate names. If the same public key/address is registered again under a different name, this overwrites localAgents in memory but persists another set of RDF/keystore entries and another auth token for the same agent, which makes reloads ambiguous. Reject duplicate agentAddress (or publicKey) before persisting.
```ts
{ subject: paranetUri, predicate: DKG_ONTOLOGY.DKG_ACCESS_POLICY, object: `"${opts.accessPolicy === 1 || opts.private ? 'private' : 'public'}"`, graph: ontologyGraph },
{ subject: paranetUri, predicate: DKG_ONTOLOGY.RDF_TYPE, object: DKG_ONTOLOGY.DKG_PARANET, graph: defGraph },
{ subject: paranetUri, predicate: DKG_ONTOLOGY.SCHEMA_NAME, object: `"${opts.name}"`, graph: defGraph },
{ subject: paranetUri, predicate: DKG_ONTOLOGY.DKG_CREATOR, object: `did:dkg:agent:${this.peerId}`, graph: defGraph },
```
🔴 Bug: dkg:creator is still being written as did:dkg:agent:${this.peerId} even when the request resolved to a different local agent via callerAgentAddress. Later owner checks read dkg:creator, so a context graph created by a non-default agent cannot manage participants or register on-chain afterward. Persist the caller agent DID here instead of the node peer DID.
```ts
try {
  const { agentAddress } = JSON.parse(body);
  if (!agentAddress) return jsonResponse(res, 400, { error: 'Missing agentAddress' });
  await agent.rejectJoinRequest(contextGraphId, agentAddress);
```
🔴 Bug: Only the approve path threads requestAgentAddress into an ownership check. This reject path, and the sibling join-requests read path above it, never verify that the caller is the curator, so any authenticated client can inspect or reject pending requests for arbitrary projects. Apply the same curator check used by approveJoinRequest before exposing or mutating join-request state.
```ts
const result = await createContextGraph(cgId, trimmedName, description.trim() || undefined);
const opts = access === 'curated'
  ? { accessPolicy: 1, allowedAgents: agentAddress ? [agentAddress] : [] }
```
🔴 Bug: fetchCurrentAgent() falls back to the node peer ID when no EVM agent is registered, but this code unconditionally sends that value in allowedAgents. The daemon validates allowedAgents as 0x... addresses, so the default curated-project flow fails on fresh/edge nodes. Only send allowedAgents when the current identity is a valid Ethereum address, or block creation until an agent is registered.
```ts
  // Non-fatal
}

await submitJoinRequest(cgId, { ...signed, agentName });
```
🟡 Issue: signJoinRequest() already hits /sign-join, and that endpoint both signs and forwards the request. Posting the returned payload to /request-join here forwards the same join request a second time, which doubles network traffic and creates duplicate pending-request writes/logs. Either make /sign-join pure signing or remove this second submission.
- Skip unresolved approvals (identityId=0n) before forwarding to on-chain verify to prevent contract reverts
- Use callerAgentAddress as CG creator instead of node peerId so non-default agents can manage projects they create
- Include legacy allowedPeers in isCurated check so peer-allowlisted CGs are treated as private and hidden from ONTOLOGY
- Use requestAgentAddress (from bearer token) instead of getDefaultAgentAddress in daemon subscribe allowlist check
- Block CG creation in UI until agent identity is loaded to prevent zero-address namespace collisions

Made-with: Cursor
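The `identityId=0n` filter described above, together with the tracking of skipped signer addresses, could be sketched like this. The `Approval` shape is an assumption; the commit specifies only the filter criterion and that filtered signers are excluded from verified-memory metadata:

```ts
interface Approval {
  signer: string;
  identityId: bigint; // 0n means the signer never resolved to an on-chain identity
  signature: string;
}

// Split approvals so only resolved identities reach the on-chain verify()
// call; forwarding identityId=0n entries would revert the contract.
function filterResolvedApprovals(approvals: Approval[]): {
  resolved: Approval[];
  skippedSigners: string[];
} {
  const resolved = approvals.filter((a) => a.identityId !== 0n);
  const skippedSigners = approvals
    .filter((a) => a.identityId === 0n)
    .map((a) => a.signer);
  return { resolved, skippedSigners };
}
```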
Pulls in two significant PRs that landed on v10-rc since the last sync:

- PR #193 "feat: persistent assertion lifecycle provenance across memory layers" — durable dkg:Assertion lifecycle record in the CG's _meta graph tracking created → promoted → published → finalized (or discarded) with timestamps, op IDs, root entities, KC UAL refs. Adds GET /api/assertion/:name/history. Crucially does NOT touch resolveViewGraphs or the underlying graph URIs — the WM/SWM/VM fan-out our slot-backed recall depends on is unchanged.
- PR #195 "feat: agent identity, access control, CLI invite flow, SSE notifications" — multi-agent-per-node identity model with Bearer-token resolution. Adds POST /api/agent/register, GET /api/agent/identity, POST /api/context-graph/register, POST /api/context-graph/invite, GET /api/events (SSE stream). Modifies POST /api/context-graph/create with new body fields (allowedAgents, accessPolicy, private, register). Single-token auth still works via backward-compat fallback to defaultAgentAddress. Full multi-agent plumbing on the adapter side is tracked as a Phase 2 follow-up in issue #201.

Merge resolution:

- Git auto-merged daemon.ts and node-ui/ui/api.ts cleanly (non-overlapping diff regions). Zero manual conflict resolution.
- Caught one stacked-conflict aftermath: POST /api/context-graph/register ended up with THREE handler blocks (L4409, L4479, L4525) from the auto-merge. Only the first was reachable; the other two were dead code, but each encoded a different error contract. Independently flagged by qa-engineer and skill-md-auditor in review. Resolution: kept the L4409 handler as canonical (richest error classification: 409 already-registered, 404 not-found, 503 no-known-creator, 403 only-creator, 500 default, all with explanatory hints). Salvaged the `typeof id !== 'string'` input guard from L4479 and added a conditional `...(result.txHash ? { txHash: result.txHash } : {})` to the 200 response so we don't drop the txHash field that the deleted variants were exposing. Deleted both duplicate blocks.

SKILL.md drift: the merge left SKILL.md at the exact 0f9950e state (v10-rc didn't touch the file). Adds a surgical +22-line patch documenting the new v10-rc agent-facing routes, distributed across existing sections per the project's single-file SKILL.md design decision (spec issue #79 comment via PR #108):

- §4 Authentication: drop the stale "planned multi-agent" note, add Bearer-token resolution language, document POST /api/agent/register + GET /api/agent/identity
- §5 Memory Model: add a GET /api/assertion/:name/history route bullet and a "Lifecycle provenance" blockquote explaining the new _meta audit trail
- §6 Context Graphs: expand /create body fields (allowedAgents, accessPolicy, private, register), add /register and /invite routes
- §8 Node Administration: add a GET /api/events SSE row

Preserved verbatim (intentional, per team-lead decision):

- §3 Turn Context Override — our dual-contract (routing authority AND UI-selection-state semantics) stays. v10-rc didn't touch this section.
- §5 "Making memories recallable" paragraph — the permissive slot-backed recall contract from 0f9950e stays. Agents need to know the slot exists and how it matches.

Tests, post-merge + post-cleanup:

- packages/adapter-openclaw: 222/222 ✓ (baseline preserved)
- packages/cli/test/daemon-openclaw.test.ts: 58/58 ✓ (baseline preserved)
- packages/node-ui: 495/495 ✓ (baseline preserved)
- packages/cli (full): 528 pass, 29 pre-existing Windows symlink/permission flakes in migration/rollback/slot-helpers/publisher-wallets/auto-update/blue-green that date back to PR #168 live validation — not merge regressions, pass on Linux CI

Impact on slot-backed recall (verified by memory-architect):

- resolveViewGraphs unchanged byte-for-byte (git diff 0f9950e HEAD -- packages/query/src/dkg-query-engine.ts returns empty)
- PR #193 assertion lifecycle records write exclusively to the _meta graph (`contextGraphMetaUri`), which is filtered out of every prefix scan by `DKGQueryEngine.discoverGraphsByPrefix` at line 227 (`!g.includes('/_meta') && !g.includes('/staging/')`). Our 6-query permissive SPARQL fan-out will NOT pick up dkg:Assertion state enum literals ("promoted", "published", "finalized") as noise — they live exclusively in graphs that our queries cannot see.
- Chat-turn persistence path (ChatMemoryManager.storeChatExchange) still writes through the createAssertion-then-writeAssertion pattern it already had; no lifecycle bootstrap gate on writes in publisher.assertionWrite.

Phase 2 follow-ups filed:

- #201 — thread multi-agent identity through DkgDaemonClient + memory slot recall (full multi-agent plumbing on the adapter side)

Reviewed by memory-architect (GREEN on slot-backed recall safety), skill-md-auditor (patch plan applied verbatim), and qa-engineer (RED on the triplicate handler, now resolved).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: multi-agent ownership and access control polish
- Skip unresolved ACK approvals (identityId=0n) before on-chain verify
to prevent contract reverts; track filtered signer addresses so
verified-memory metadata only includes actual verify() participants
- Store callerAgentAddress as CG creator/curator DID so non-default
agents on multi-agent nodes own the graphs they create; on-chain ops
still use the node wallet (documented limitation until per-agent signers)
- Add isCallerOrNodeOwner() helper: accepts the specific caller DID when
provided, falls back to peerId/defaultAgentAddress for node-level tokens;
no longer iterates all localAgents (prevents cross-agent escalation)
- Thread callerAgentAddress through registerContextGraph, inviteToContextGraph,
approveCclPolicy, revokeCclPolicy and their daemon routes
- Use persisted owner DID from getContextGraphOwner() for CCL metadata
(approvedBy/revokedBy/creator) so gossip-publish-handler accepts bindings
- Include legacy allowedPeers in isCurated check so peer-allowlisted CGs
are correctly treated as private
- Daemon subscribe allowlist: only reject when callerAddr is a real
Ethereum address (skip peerId fallbacks to avoid false 403s)
- UI: block CG creation until agent identity loads; add retry/error state
for transient /api/agent/identity failures
Made-with: Cursor
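The `isCallerOrNodeOwner()` behavior described above can be sketched as a simplified model. This is not the real helper; the parameter names come from the commit text and the DID format from the surrounding discussion:

```ts
function isCallerOrNodeOwner(
  ownerDid: string,
  callerAgentAddress: string | undefined,
  peerId: string,
  defaultAgentAddress: string,
): boolean {
  if (callerAgentAddress) {
    // Per-agent token: only the exact caller DID may match. Never iterate
    // all localAgents — that was the cross-agent escalation vector.
    return ownerDid === `did:dkg:agent:${callerAgentAddress}`;
  }
  // Node-level token: fall back to either node identity form.
  return (
    ownerDid === `did:dkg:agent:${peerId}` ||
    ownerDid === `did:dkg:agent:${defaultAgentAddress}`
  );
}
```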
* fix(agent): decouple CREATOR from CURATOR and cover non-default-agent path
The previous revision of this PR also rewrote `dkg:creator` to the caller's
wallet DID, which broke `resolveCuratorPeerId()` — that helper relies on
`DKG_CREATOR` being a libp2p peer ID so meta-refresh can dial the curator
deterministically. The e2e-privacy regression (`B refreshes meta from
curator when C (late invite) requests sync`) surfaced the break.
Changes:
- Restore `DKG_CREATOR = did:dkg:agent:${peerId}` in `createContextGraph()`.
The caller's wallet identity now lives only in `DKG_CURATOR`.
- Switch `getContextGraphOwner()` to prefer `DKG_CURATOR` (wallet-scoped)
with `DKG_CREATOR` as a fallback for legacy CGs. Ownership checks keep
working for per-agent auth while peer resolution stays intact.
- Update the two existing paranet-owner tests: they shared a store (which
forces a shared `defaultAgentAddress` that maps every local identity to
the default), so the "non-owner" case now passes an explicit
`callerAgentAddress` to prove non-owner wallets are rejected.
- Add a new regression test: a non-default agent owns a CG, and neither
the node's default-agent token nor a sibling wallet can approve/revoke
its policies — only the owning caller wallet can (closes Codex 🟡 on
missing coverage for the non-default-agent path).
All 301 agent vitest tests pass; `pnpm tsc --noEmit` is clean across
`packages/agent`, `packages/cli`, and `packages/node-ui`.
Made-with: Cursor
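The resulting owner-resolution order — `DKG_CURATOR` preferred, `DKG_CREATOR` as the legacy fallback — reduces to the following sketch, with the `_meta` lookup modeled as a plain object:

```ts
interface CgMeta {
  DKG_CREATOR: string;   // always did:dkg:agent:<peerId>, used for peer dialing
  DKG_CURATOR?: string;  // wallet DID; absent on legacy context graphs
}

// Prefer the wallet-scoped curator for ownership checks; fall back to the
// peer-DID creator so legacy CGs created before the split keep working.
function getContextGraphOwner(meta: CgMeta): string {
  return meta.DKG_CURATOR ?? meta.DKG_CREATOR;
}
```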
* fix(agent): align policy-binding owner DID with public DKG_CREATOR
Codex round-3 flagged a real regression introduced by the previous commit:
approve/revoke policy bindings were emitting `dkg:approvedBy` /
`dkg:revokedBy` with the CURATOR wallet DID, but remote peers resolve
the paranet owner through `DKG_CREATOR` (peer ID DID) in ONTOLOGY gossip
— they never see `DKG_CURATOR` because it lives in `_meta`. The
`gossip-publish-handler` validation would therefore reject every approve
and revoke for a public CG until (and unless) the `_meta` graph happened
to sync.
Split the two concerns:
- `getContextGraphOwner()` keeps preferring `DKG_CURATOR` for local
authorization (so per-agent owner checks still work).
- New `getContextGraphCreator()` reads only `DKG_CREATOR` (peer DID);
`approveCclPolicy` and `revokeCclPolicy` now emit this as the binding
owner so remote peers can validate via ONTOLOGY alone.
All 301 agent vitest tests still pass, including e2e-privacy and
gossip-validation.
Made-with: Cursor
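The creator/curator split can be sketched as follows. The shapes are illustrative only, and `policyBinding` is a stand-in for the approve/revoke emission paths; the key property is that bindings carry the peer-DID creator, which remote peers can see via ONTOLOGY gossip without waiting for `_meta` to sync:

```ts
interface GraphMeta {
  DKG_CREATOR: string;   // peer DID, propagated publicly via ONTOLOGY
  DKG_CURATOR?: string;  // wallet DID, lives only in _meta
}

// Reads only DKG_CREATOR so remote validation never depends on _meta sync.
function getContextGraphCreator(meta: GraphMeta): string {
  return meta.DKG_CREATOR;
}

// Emit the binding owner on the peer-DID axis remote peers validate against.
function policyBinding(meta: GraphMeta, policyId: string) {
  return {
    policyId,
    'dkg:approvedBy': getContextGraphCreator(meta),
  };
}
```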
* fix: round-4 — gossip owner lookup, daemon 403 mapping, wider test coverage
Three findings from Codex on the previous push:
1. 🔴 `GossipPublishHandler` used `getContextGraphOwner()` for validating
incoming policy `approvedBy`/`revokedBy`. Since that now prefers the
curator wallet DID (used for local auth) and approvals are emitted
with the creator peer DID (for public propagation), invited peers
would reject every valid approval once they synced `_meta`. Switch
the handler's owner callback to `getContextGraphCreator()` so the
comparison stays on peer-DID axes end-to-end.
2. 🔴 `/api/ccl/policy/approve` and `/api/ccl/policy/revoke` now surface
owner-check failures (because `callerAgentAddress` is threaded through
from the API token). Map the "Only the paranet owner can manage
policies" error to a 403 instead of falling through to the top-level
500 handler — mirrors the existing 403 handling on `/register` and
`/invite`.
3. 🟡 The new regression test claimed register+invite coverage but only
exercised approve/revoke. Expanded it to also drive
`inviteToContextGraph` through the three paths (default token
rejected, sibling wallet rejected, owning caller accepted) and
revokeCclPolicy is now exercised with all three callers too.
`registerContextGraph` shares the same `assertCallerIsOwner` code
path as invite (noted in the test comment) so the existing coverage
protects it.
All 301 agent tests pass, including e2e-privacy and gossip-validation.
`pnpm tsc --noEmit` clean across agent/cli/node-ui.
Made-with: Cursor
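The 403 mapping from finding 2 amounts to a small error-to-status translation. The error string is quoted from the commit; the helper itself is an assumption mirroring the existing 403 handling on `/register` and `/invite`:

```ts
// Map the owner-check failure to 403 Forbidden instead of letting it fall
// through to the top-level 500 handler.
function policyErrorStatus(err: Error): number {
  if (err.message.includes('Only the paranet owner can manage policies')) {
    return 403;
  }
  return 500; // everything else stays an internal error
}
```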
---------
Co-authored-by: Branimir Rakic <aleatoric@Branimirs-MacBook-Pro.local>
Summary
Comprehensive agent identity and access control for V10 context graphs, including CLI and UI flows for project invitations, real-time notifications, and memory isolation.
Core Changes
- `assertion.promote()` now uses wallet address instead of peer ID, matching `import-file` behavior
- Curated context graphs (`accessPolicy=1`) now auto-include the creator even without explicit `allowedAgents`
- Ownership checks use `getContextGraphOwner()` instead of just `contextGraphExists()`
- `createContextGraph`, `inviteAgent`, `removeAgent`, and `approveJoin` all use the caller's resolved agent address (from token) rather than the node default
- Caller resolution consults the `localAgents` registry, not just `defaultAgentAddress`
- `memoryLayer` tracking for WM assertions in the `_meta` graph
CLI Invite Flow (new)
7 new commands under `dkg context-graph`:
- `add-agent`, `remove-agent`, `agents` — manage allowlists
- `request-join`, `approve-join`, `reject-join`, `join-requests` — full join flow
- Project IDs are namespaced by creator address (`my-project` → `0xAbC.../my-project`)
- `--access-policy` and `--allowed-agent` options on `create`
- Deprecates `invite --peer` in favor of `add-agent --agent`
SSE Notifications
- Server-sent events endpoint (`GET /api/events`) for real-time UI updates
- `join_request`, `join_approved`, `project_synced` event types
- `useNodeEvents` React hook for UI integration
UI Improvements
- Fixed `wmSparql`, preventing phantom fact counts
- `CreateProjectModal` blocks project creation until the agent identity loads
Tests
- `devnet-test-sharing.sh`: 16-section test covering WM isolation, SWM sync, late joiner, promote, publish, clearAfter
- `devnet-test-cli-invite.sh`: 21-check end-to-end test for CLI + API invite flows, cross-checks, edge cases
Spec Updates
- Updated `SPEC_V10_IDENTITY_AND_ACCESS.md` with join-approval broadcast, SSE events, key restoration
Test plan
- `devnet-test-cli-invite.sh` — 21/21 passed (CLI create, join, approve, add/remove agent, API flow, cross-checks, edge cases)
- `devnet-test-sharing.sh` — WM isolation, SWM sync, VM publish
- Typecheck (`tsc --noEmit`) clean for agent, cli, core packages
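The event types listed under SSE Notifications (`join_request`, `join_approved`, `project_synced`) come from the PR summary; a client of `GET /api/events` would see them as standard SSE frames. The frame parser below is an illustrative sketch of that framing, not the `useNodeEvents` implementation:

```ts
type NodeEventType = 'join_request' | 'join_approved' | 'project_synced';

interface NodeEvent {
  type: NodeEventType;
  data: unknown;
}

// Parse one SSE frame ("event: <name>\ndata: <json>\n\n") into a NodeEvent.
function parseSseFrame(frame: string): NodeEvent | null {
  let type: string | undefined;
  let data = '';
  for (const line of frame.split('\n')) {
    if (line.startsWith('event:')) type = line.slice(6).trim();
    else if (line.startsWith('data:')) data += line.slice(5).trim();
  }
  if (type !== 'join_request' && type !== 'join_approved' && type !== 'project_synced') {
    return null; // ignore frame types this sketch doesn't model
  }
  return { type, data: data ? JSON.parse(data) : null };
}
```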