feat: viewer UI redesign with real-time streaming #10
Conversation
Redesigned the agentmemory viewer with a newsprint-inspired design system and added real-time live updates via the iii-engine StreamModule.

Viewer UI:
- Newsprint design with Playfair Display / Lora / Inter typography
- Dashboard with semantic memory, procedural memory, and consolidation status
- Knowledge graph with curved edges, node shapes, legend, search, and tooltips
- Visual vertical timeline with alternating cards and type filter chips
- Activity tab with GitHub-style heatmap, type breakdown, and activity feed
- Profile, Sessions, Audit, and Memories tabs

Real-time streaming:
- Fixed stream event propagation (the engine uses exact group_id matching)
- Added a viewer broadcast group for cross-session live updates
- Switched from a blocking trigger to fire-and-forget triggerVoid
- Viewer subscribes to the 'viewer' group and handles create/update events

New files:
- iii-config.yaml: engine config with StreamModule, PubSub, Otel
- src/viewer/server.ts: standalone viewer HTTP server on port 3113

API additions:
- GET /agentmemory/memories endpoint with ?latest=true filter
- Viewer file resolution with multiple candidate paths
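The fire-and-forget broadcast described above can be sketched roughly as follows. This is an illustrative sketch, not the actual SDK surface: the `Sdk` interface shape, the `StreamEvent` type, and the `broadcastToViewers` helper are assumptions; only the `stream::set` trigger name, the `mem-live` stream, and the `viewer` group come from this PR.

```typescript
// Hypothetical sketch of the viewer broadcast path. The Sdk interface here
// is an assumption for illustration; only the trigger name "stream::set",
// the stream "mem-live", and the group "viewer" appear in this PR.
interface StreamEvent {
  streamName: string;
  groupId: string;
  eventType: "create" | "update";
  payload: unknown;
}

interface Sdk {
  // Fire-and-forget: does not await delivery, so observation writes
  // are never blocked by slow or absent viewer subscribers.
  triggerVoid(name: string, event: StreamEvent): void;
}

function broadcastToViewers(
  sdk: Sdk,
  payload: unknown,
  eventType: "create" | "update",
): StreamEvent {
  const event: StreamEvent = {
    streamName: "mem-live",
    // The engine matches group_id exactly, so a dedicated group is required
    // for cross-session fan-out.
    groupId: "viewer",
    eventType,
    payload,
  };
  sdk.triggerVoid("stream::set", event);
  return event;
}
```

Because `triggerVoid` returns immediately, the write path stays fast even when no viewer is connected; the trade-off is that delivery is best-effort.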
📝 Walkthrough

Adds a standalone viewer HTTP server with many /agentmemory/* APIs, broadcasts observations to a new viewer stream group, extends the STREAM schema with viewerGroup, integrates viewer startup/shutdown, and provides runtime configuration in iii-config.yaml.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant SDK
    participant ViewerServer as Viewer Server
    participant KV as State KV
    participant Metrics as Metrics Store
    Client->>SDK: trigger stream::set (session group)
    activate SDK
    SDK->>KV: persist observation
    KV-->>SDK: ack
    SDK->>SDK: triggerVoid stream::set (viewer group)
    SDK-->>ViewerServer: (broadcast) observation event
    deactivate SDK
    ViewerServer->>KV: update graph/profile derived data
    KV-->>ViewerServer: updated aggregates
    ViewerServer->>Metrics: optional metrics read/write
    Metrics-->>ViewerServer: metrics response
    Client->>ViewerServer: GET /agentmemory/graph/stats
    activate ViewerServer
    ViewerServer->>KV: read graph nodes/edges
    KV-->>ViewerServer: graph data
    ViewerServer-->>Client: 200 Graph stats
    deactivate ViewerServer
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks | ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 5
🧹 Nitpick comments (1)
src/index.ts (1)
228-240: Close the viewer server in shutdown for graceful termination.
The startViewerServer(...) return value is ignored, so active viewer connections are not drained before process.exit.

♻️ Suggested refactor
```diff
   const viewerPort = config.restPort + 2;
-  startViewerServer(viewerPort, kv, sdk, secret, metricsStore, provider);
+  const viewerServer = startViewerServer(
+    viewerPort,
+    kv,
+    sdk,
+    secret,
+    metricsStore,
+    provider,
+  );
   const shutdown = async () => {
     console.log(`\n[agentmemory] Shutting down...`);
+    await new Promise<void>((resolve) => viewerServer.close(() => resolve()));
     healthMonitor.stop();
     dedupMap.stop();
     indexPersistence.stop();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/index.ts` around lines 228 - 240, The viewer server started by startViewerServer(viewerPort, kv, sdk, secret, metricsStore, provider) is never captured or closed, so active connections aren't drained; update the code to store the returned server instance (e.g., viewerServer) from startViewerServer and in the shutdown async function call its graceful shutdown method (e.g., await viewerServer.close() or await viewerServer.stop()) before calling process.exit(0), ensuring you handle/rethrow or log errors from that close call consistently with other shutdown steps.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@iii-config.yaml`:
- Around line 15-17: Replace the permissive wildcard in cors.allowed_origins
with an explicit allowlist of trusted origins (e.g., local dev and production
viewer origins) and load them from a config/env variable rather than using
["*"]; update iii-config.yaml's cors.allowed_origins to reference that allowlist
(keeping cors.allowed_methods as-is) so only approved origins can call mutating
endpoints.
In `@src/triggers/api.ts`:
- Around line 1009-1013: The current return object for the missing viewer case
uses status_code: 200 which incorrectly signals success; change the returned
status_code to 404 in the block that returns { status_code: 200, headers, body:
"<!DOCTYPE html>...viewer not found..." } so the response correctly indicates
"Not Found" while leaving headers and body content intact (or adjust body to a
short 404 message if desired).
In `@src/viewer/server.ts`:
- Around line 370-390: The route handling in the HTTP server only matches
pathname "/" and "/viewer" so requests to "/agentmemory/viewer" fall through;
update the request-path check in the handler that uses variables method,
pathname and candidates (the block that reads files with readFileSync and
responds via res.writeHead/res.end) to also accept "/agentmemory/viewer" (or use
a startsWith check if you want a prefix match) so the same index.html candidates
are served for that path before falling through to API routing.
- Around line 693-707: In the "session/end" request handler, validate
body.sessionId before calling kv.get/kv.set and before returning json(res, 200):
ensure body.sessionId exists and is a string (e.g., typeof body.sessionId ===
"string" and not empty); if validation fails, return an appropriate error
response (e.g., json(res, 400, { success: false, error: "invalid sessionId" }))
so the code in the try block that uses KV.sessions, kv.get, and kv.set only runs
for a valid sessionId and you don't incorrectly return success for invalid
input.
- Around line 247-253: The graph rebuild currently writes new node/edge entries
but never removes stale ones; modify the block around the nodes/edges write
(where nodeMap, nodes, edges and kv.set are used for KV.graphNodes and
KV.graphEdges) to first remove existing keys for those namespaces before writing
the new set — either call a provided namespace clear method (e.g.,
kv.clear(KV.graphNodes)/kv.clear(KV.graphEdges)) or list keys in
KV.graphNodes/KV.graphEdges and delete each with kv.delete(key), then write the
new nodes and edges as currently implemented.
---
Nitpick comments:
In `@src/index.ts`:
- Around line 228-240: The viewer server started by
startViewerServer(viewerPort, kv, sdk, secret, metricsStore, provider) is never
captured or closed, so active connections aren't drained; update the code to
store the returned server instance (e.g., viewerServer) from startViewerServer
and in the shutdown async function call its graceful shutdown method (e.g.,
await viewerServer.close() or await viewerServer.stop()) before calling
process.exit(0), ensuring you handle/rethrow or log errors from that close call
consistently with other shutdown steps.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- iii-config.yaml
- src/functions/compress.ts
- src/functions/observe.ts
- src/index.ts
- src/state/schema.ts
- src/triggers/api.ts
- src/viewer/index.html
- src/viewer/server.ts
```yaml
cors:
  allowed_origins: ["*"]
  allowed_methods: [GET, POST, PUT, DELETE, OPTIONS]
```
Avoid wildcard CORS on the API module.
allowed_origins: ["*"] is too permissive for endpoints that mutate state; prefer explicit local viewer origins.
🔒 Suggested hardening

```diff
 cors:
-  allowed_origins: ["*"]
+  allowed_origins:
+    - "http://localhost:3113"
+    - "http://127.0.0.1:3113"
   allowed_methods: [GET, POST, PUT, DELETE, OPTIONS]
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
cors:
  allowed_origins:
    - "http://localhost:3113"
    - "http://127.0.0.1:3113"
  allowed_methods: [GET, POST, PUT, DELETE, OPTIONS]
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@iii-config.yaml` around lines 15 - 17, Replace the permissive wildcard in
cors.allowed_origins with an explicit allowlist of trusted origins (e.g., local
dev and production viewer origins) and load them from a config/env variable
rather than using ["*"]; update iii-config.yaml's cors.allowed_origins to
reference that allowlist (keeping cors.allowed_methods as-is) so only approved
origins can call mutating endpoints.
- Restrict CORS allowed_origins to local dev origins instead of wildcard
- Return 404 instead of 200 when viewer HTML file is not found
- Add /agentmemory/viewer path to standalone viewer server routing
- Validate sessionId in session/end handler before KV operations
- Clear stale graph nodes/edges before writing rebuilt data
- Capture viewer server and close gracefully on shutdown
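The sessionId validation mentioned above can be sketched as a small guard that runs before any KV access. The helper names (`isValidSessionId`, `validateSessionEnd`) and the return shape are assumptions for illustration; only the "reject non-string or empty sessionId with a 400-style error" behavior comes from the review.

```typescript
// Sketch of the sessionId guard for the session/end handler. Helper names
// and the error payload shape are illustrative assumptions; the review only
// specifies rejecting missing/non-string/empty sessionId before KV work.
function isValidSessionId(value: unknown): value is string {
  return typeof value === "string" && value.trim().length > 0;
}

// Returns the error payload the handler would send with status 400,
// or null when the body is valid and KV operations may proceed.
function validateSessionEnd(
  body: Record<string, unknown>,
): { success: false; error: string } | null {
  if (!isValidSessionId(body.sessionId)) {
    return { success: false, error: "invalid sessionId" };
  }
  return null;
}
```

Running the guard first means the handler never returns a success response for invalid input.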
Actionable comments posted: 2
🧹 Nitpick comments (1)
src/triggers/api.ts (1)
980-983: Consider pagination for GET /agentmemory/memories.

Line 980 currently loads and returns the full KV.memories scope, which can grow unbounded and degrade latency/payload size. Add limit/offset (or cursor) support and apply latest filtering after a bounded fetch.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/triggers/api.ts` around lines 980 - 983, The handler for GET /agentmemory/memories currently calls kv.list(KV.memories) and returns everything; change it to accept pagination parameters (e.g., query params "limit" and "offset" or "cursor") from req.query_params, parse and validate them (apply a sensible max cap), call kv.list(KV.memories, { limit, cursor/offset }) to fetch a bounded page, then apply the existing latest filtering (the latest ? memories.filter((m) => m.isLatest) : memories) to the page results and return the page plus pagination metadata (e.g., nextCursor or total/offset) in the response; reference kv.list, KV.memories, req.query_params, latest, and filtered when making the changes.
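The bounded-page approach described above might look roughly like the pure helper below. The `paginateMemories` name, the default limit of 50, the max cap of 200, and the response shape are all assumptions for illustration; the actual handler would read these values from `req.query_params` and fetch through `kv.list`.

```typescript
// Sketch of bounded pagination plus latest-filtering for a memories list.
// Helper name, defaults, and the MAX_LIMIT cap are illustrative assumptions.
interface Memory {
  id: string;
  isLatest?: boolean;
}

const MAX_LIMIT = 200;

function paginateMemories(
  all: Memory[],
  rawLimit: string | null,
  rawOffset: string | null,
  latestOnly: boolean,
): { items: Memory[]; total: number; offset: number; limit: number } {
  // Parse with an explicit radix and clamp to a sensible cap.
  const limit = Math.min(
    Math.max(parseInt(rawLimit || "50", 10) || 50, 1),
    MAX_LIMIT,
  );
  const offset = Math.max(parseInt(rawOffset || "0", 10) || 0, 0);
  // Bounded fetch first, then apply the latest filter to the page,
  // as the review comment describes.
  const page = all.slice(offset, offset + limit);
  const items = latestOnly ? page.filter((m) => m.isLatest) : page;
  return { items, total: all.length, offset, limit };
}
```

Returning `total` and `offset` alongside the page gives clients enough metadata to request the next page.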
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/viewer/server.ts`:
- Around line 29-33: The CORS config (constant CORS in server.ts) is too
permissive—replace the wildcard origin with an explicit allow-list (e.g., read
allowed origin(s) from an env var like VIEWER_ORIGIN or VIEWER_ALLOWED_ORIGINS
and validate the request Origin against that list in your request handler) and
return the matching Origin value in "Access-Control-Allow-Origin" instead of
"*"; also add the "Vary: Origin" header to the response so caches respect the
Origin-specific response. Locate the CORS constant and the request/response
handling around it (server.ts) and implement origin validation + dynamic header
setting; ensure headers still include methods/headers and consider adding
Access-Control-Allow-Credentials if cookies/auth are used.
- Around line 189-190: The current code writes memNode.properties.memoryId =
mem.id and memNode.properties.memoryType = mem.type which blindly overwrites
prior metadata when ensureNode (used to dedupe by type:name) maps multiple
memories to the same node; change these properties to support multiple values
and merge instead of overwrite: use memoryIds (array) and memoryTypes (array or
set) on memNode.properties, push mem.id / mem.type only if not already present,
and update any consumers to read the pluralized fields (memoryIds/memoryTypes)
or maintain backward-compatible single-value fallback handling in ensureNode /
memNode creation paths.
---
Nitpick comments:
In `@src/triggers/api.ts`:
- Around line 980-983: The handler for GET /agentmemory/memories currently calls
kv.list(KV.memories) and returns everything; change it to accept pagination
parameters (e.g., query params "limit" and "offset" or "cursor") from
req.query_params, parse and validate them (apply a sensible max cap), call
kv.list(KV.memories, { limit, cursor/offset }) to fetch a bounded page, then
apply the existing latest filtering (the latest ? memories.filter((m) =>
m.isLatest) : memories) to the page results and return the page plus pagination
metadata (e.g., nextCursor or total/offset) in the response; reference kv.list,
KV.memories, req.query_params, latest, and filtered when making the changes.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- iii-config.yaml
- src/index.ts
- src/triggers/api.ts
- src/viewer/server.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- src/index.ts
- iii-config.yaml
Replace static CORS wildcard with dynamic corsHeaders(req) that validates the request Origin against an allowlist. Pass req to all json() call sites. Change memNode.properties from scalar memoryId/memoryType to array-based memoryIds/memoryTypes to prevent overwrite when multiple memories map to the same graph node.
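The dynamic origin validation described in that change can be sketched with two pure helpers. This is a sketch under stated assumptions: `parseAllowlist`, `corsHeaders`, and the env-variable name `VIEWER_ALLOWED_ORIGINS` are illustrative, not the PR's actual identifiers; only the allowlist check, echoing the matching Origin instead of `*`, and the `Vary: Origin` header come from the review.

```typescript
// Sketch of allowlist-based CORS as the review describes. Function names and
// the VIEWER_ALLOWED_ORIGINS env var name are illustrative assumptions.
function parseAllowlist(envValue: string | undefined): string[] {
  // e.g. envValue = process.env.VIEWER_ALLOWED_ORIGINS (assumed name)
  return (envValue || "http://localhost:3113,http://127.0.0.1:3113")
    .split(",")
    .map((s) => s.trim())
    .filter((s) => s.length > 0);
}

function corsHeaders(
  requestOrigin: string | undefined,
  allowlist: string[],
): Record<string, string> {
  const headers: Record<string, string> = {
    "Access-Control-Allow-Methods": "GET, POST, PUT, DELETE, OPTIONS",
    "Access-Control-Allow-Headers": "Content-Type, Authorization",
    // Caches must key on Origin once the allowed origin varies per request.
    "Vary": "Origin",
  };
  if (requestOrigin && allowlist.includes(requestOrigin)) {
    // Echo the matching origin instead of the wildcard "*".
    headers["Access-Control-Allow-Origin"] = requestOrigin;
  }
  return headers;
}
```

An unlisted origin simply gets no `Access-Control-Allow-Origin` header, which browsers treat as a CORS denial.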
Actionable comments posted: 1
🧹 Nitpick comments (4)
src/viewer/server.ts (4)
590-593: Add radix to parseInt for clarity.

While the default behavior for "50" is correct, explicitly specifying radix 10 is a best practice that avoids surprises with prefixed input (e.g. a "0x" prefix being parsed as hexadecimal).

```diff
- limit: parseInt(params.get("limit") || "50"),
+ limit: parseInt(params.get("limit") || "50", 10),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/viewer/server.ts` around lines 590 - 593, The parseInt call used when building the sdk.trigger payload should include an explicit radix; update the parseInt invocation in the code that prepares the "limit" field (the line invoking parseInt(params.get("limit") || "50")) to pass 10 as the second argument so it reads parseInt(..., 10), ensuring decimal parsing of params.get("limit") prior to calling sdk.trigger("mem::audit-query").
787-803: Consider extracting body parsing to reduce duplication.The body parsing logic (lines 788-795) is nearly identical to the POST handler (lines 687-694). This is a minor duplication that could be extracted if more methods need body parsing.
♻️ Extract parseJsonBody helper
Add before handleApiRoute:

```typescript
async function parseJsonBody(
  req: IncomingMessage,
  res: ServerResponse,
): Promise<Record<string, unknown> | null> {
  try {
    const raw = await readBody(req);
    return raw.trim() ? JSON.parse(raw) : {};
  } catch {
    json(res, 400, { error: "invalid JSON" }, req);
    return null;
  }
}
```

Then in handlers:

```diff
   if (method === "POST") {
-    let body: Record<string, unknown> = {};
-    try {
-      const raw = await readBody(req);
-      if (raw.trim()) body = JSON.parse(raw);
-    } catch {
-      json(res, 400, { error: "invalid JSON" }, req);
-      return;
-    }
+    const body = await parseJsonBody(req, res);
+    if (body === null) return;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/viewer/server.ts` around lines 787 - 803, Extract the duplicated JSON parsing into a helper (e.g., parseJsonBody) used by handlers in handleApiRoute: move the try/readBody/JSON.parse + error-response (json(res, 400, { error: "invalid JSON" }, req)) into parseJsonBody(req, res) which returns the parsed Record<string, unknown> or null on error; replace the inline parsing blocks in the DELETE branch (and the existing POST branch) to call parseJsonBody and early-return when it returns null before calling sdk.trigger("mem::governance-delete", body) so behavior and error responses remain identical.
273-288: Sequential awaits in loops could be parallelized.

The delete and write operations are executed sequentially, which can be slow with many nodes/edges. Consider using Promise.all for better throughput:

♻️ Suggested parallelization

```diff
   const oldNodes = await kv.list<GraphNode>(KV.graphNodes).catch(() => []);
-  for (const old of oldNodes) {
-    await kv.delete(KV.graphNodes, old.id);
-  }
+  await Promise.all(oldNodes.map((old) => kv.delete(KV.graphNodes, old.id)));
   const oldEdges = await kv.list<GraphEdge>(KV.graphEdges).catch(() => []);
-  for (const old of oldEdges) {
-    await kv.delete(KV.graphEdges, old.id);
-  }
+  await Promise.all(oldEdges.map((old) => kv.delete(KV.graphEdges, old.id)));
   const nodes = [...nodeMap.values()];
-  for (const n of nodes) {
-    await kv.set(KV.graphNodes, n.id, n);
-  }
-  for (const e of edges) {
-    await kv.set(KV.graphEdges, e.id, e);
-  }
+  await Promise.all(nodes.map((n) => kv.set(KV.graphNodes, n.id, n)));
+  await Promise.all(edges.map((e) => kv.set(KV.graphEdges, e.id, e)));
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/viewer/server.ts` around lines 273 - 288, The current sequence iterates and awaits each kv.delete/kv.set one-by-one (using oldNodes/oldEdges and nodes/edges), causing slow serial IO; change each loop to collect promises (e.g., map oldNodes -> kv.delete(KV.graphNodes, old.id) and oldEdges -> kv.delete(...), and map nodes/edges -> kv.set(...)) and then await Promise.all on each promise array so deletes and writes run in parallel while preserving the same remove-then-write ordering; update the blocks around kv.list, the for loops over oldNodes/oldEdges, and the subsequent writes using Promise.all to improve throughput.
234-271: Quadratic complexity in function-file relationship inference.

This nested loop iterates over all function nodes, then for each checks edges against all file nodes, then iterates over all other function nodes checking edges again. With large graphs this becomes O(funcNodes² × fileNodes × edges).
Consider building a lookup map for faster edge queries:
♻️ Suggested optimization using edge index

```diff
+  // Build edge index for O(1) lookups
+  const nodeEdges = new Map<string, Set<string>>();
+  for (const e of edges) {
+    if (!nodeEdges.has(e.sourceNodeId)) nodeEdges.set(e.sourceNodeId, new Set());
+    if (!nodeEdges.has(e.targetNodeId)) nodeEdges.set(e.targetNodeId, new Set());
+    nodeEdges.get(e.sourceNodeId)!.add(e.targetNodeId);
+    nodeEdges.get(e.targetNodeId)!.add(e.sourceNodeId);
+  }
+
   const fileNodes = [...nodeMap.values()].filter((n) => n.type === "file");
   const funcNodes = [...nodeMap.values()].filter((n) => n.type === "function");
+  const addedPairs = new Set<string>();
+
   for (const fn of funcNodes) {
+    const fnConnections = nodeEdges.get(fn.id) || new Set();
     for (const file of fileNodes) {
-      const hasEdge = edges.some(
-        (e) =>
-          (e.sourceNodeId === fn.id && e.targetNodeId === file.id) ||
-          (e.sourceNodeId === file.id && e.targetNodeId === fn.id),
-      );
-      if (!hasEdge) continue;
+      if (!fnConnections.has(file.id)) continue;
       for (const fn2 of funcNodes) {
         if (fn2.id === fn.id) continue;
-        const alsoTouches = edges.some(
-          (e) =>
-            (e.sourceNodeId === fn2.id && e.targetNodeId === file.id) ||
-            (e.sourceNodeId === file.id && e.targetNodeId === fn2.id),
-        );
+        const fn2Connections = nodeEdges.get(fn2.id) || new Set();
+        const alsoTouches = fn2Connections.has(file.id);
         if (alsoTouches) {
-          const exists = edges.some(
-            (e) =>
-              (e.sourceNodeId === fn.id && e.targetNodeId === fn2.id) ||
-              (e.sourceNodeId === fn2.id && e.targetNodeId === fn.id),
-          );
-          if (!exists) {
+          const pairKey = [fn.id, fn2.id].sort().join(":");
+          if (!addedPairs.has(pairKey) && !fnConnections.has(fn2.id)) {
+            addedPairs.add(pairKey);
             edges.push({
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/viewer/server.ts` around lines 234 - 271, The current triple-nested loops over funcNodes, fileNodes and funcNodes cause O(func² × file × edges) complexity; instead build an index from edges in a single pass (e.g., map fileId -> Set of functionIds and a Set of existing pair keys for quick existence checks) then iterate each file's function set and produce related pairs once; use the same edge shape (id via gid("ge"), type "related_to", weight 0.5, sourceObservationIds: [], createdAt: now) and check the existing-pair set before pushing to edges to avoid duplicates (references: fileNodes, funcNodes, edges, gid, createdAt, "related_to").
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/viewer/server.ts`:
- Around line 211-216: The code assigns string[] to memNode.properties.memoryIds
and memoryTypes but GraphNode.properties is typed as Record<string, string>;
either change the GraphNode interface to accept arrays (e.g., update
GraphNode.properties to Record<string, string | string[]> or a more specific
type that allows memoryIds: string[] and memoryTypes: string[]) or serialize the
arrays when assigning (e.g., set memoryIds = JSON.stringify(ids) and memoryTypes
= JSON.stringify(types)); update all usages of memNode.properties.memoryIds and
memoryTypes accordingly to parse JSON when needed or adjust types so downstream
code compiles.
---
Nitpick comments:
In `@src/viewer/server.ts`:
- Around line 590-593: The parseInt call used when building the sdk.trigger
payload should include an explicit radix; update the parseInt invocation in the
code that prepares the "limit" field (the line invoking
parseInt(params.get("limit") || "50")) to pass 10 as the second argument so it
reads parseInt(..., 10), ensuring decimal parsing of params.get("limit") prior
to calling sdk.trigger("mem::audit-query").
- Around line 787-803: Extract the duplicated JSON parsing into a helper (e.g.,
parseJsonBody) used by handlers in handleApiRoute: move the
try/readBody/JSON.parse + error-response (json(res, 400, { error: "invalid JSON"
}, req)) into parseJsonBody(req, res) which returns the parsed Record<string,
unknown> or null on error; replace the inline parsing blocks in the DELETE
branch (and the existing POST branch) to call parseJsonBody and early-return
when it returns null before calling sdk.trigger("mem::governance-delete", body)
so behavior and error responses remain identical.
- Around line 273-288: The current sequence iterates and awaits each
kv.delete/kv.set one-by-one (using oldNodes/oldEdges and nodes/edges), causing
slow serial IO; change each loop to collect promises (e.g., map oldNodes ->
kv.delete(KV.graphNodes, old.id) and oldEdges -> kv.delete(...), and map
nodes/edges -> kv.set(...)) and then await Promise.all on each promise array so
deletes and writes run in parallel while preserving the same remove-then-write
ordering; update the blocks around kv.list, the for loops over
oldNodes/oldEdges, and the subsequent writes using Promise.all to improve
throughput.
- Around line 234-271: The current triple-nested loops over funcNodes, fileNodes
and funcNodes cause O(func² × file × edges) complexity; instead build an index
from edges in a single pass (e.g., map fileId -> Set of functionIds and a Set of
existing pair keys for quick existence checks) then iterate each file's function
set and produce related pairs once; use the same edge shape (id via gid("ge"),
type "related_to", weight 0.5, sourceObservationIds: [], createdAt: now) and
check the existing-pair set before pushing to edges to avoid duplicates
(references: fileNodes, funcNodes, edges, gid, createdAt, "related_to").
```typescript
const ids = (memNode.properties.memoryIds as string[]) || [];
ids.push(mem.id);
memNode.properties.memoryIds = ids;
const types = (memNode.properties.memoryTypes as string[]) || [];
if (!types.includes(mem.type)) types.push(mem.type);
memNode.properties.memoryTypes = types;
```
🧩 Analysis chain
🏁 Script executed:

```shell
# Find the GraphNode type definition
fd -t f "\.ts$|\.tsx$" | head -20
```

Repository: rohitg00/agentmemory
Length of output: 573

🏁 Script executed:

```shell
# Search for GraphNode definition
rg -t ts "interface GraphNode|type GraphNode" -A 5
```

Repository: rohitg00/agentmemory
Length of output: 238

🏁 Script executed:

```shell
# Look at the specific file and lines in context
cat -n src/viewer/server.ts | sed -n '200,280p'
```

Repository: rohitg00/agentmemory
Length of output: 3223

🏁 Script executed:

```shell
# Search for usages of memoryIds and memoryTypes
rg -t ts "memoryIds|memoryTypes" -B 2 -A 2
```

Repository: rohitg00/agentmemory
Length of output: 5378

🏁 Script executed:

```shell
# Get the complete GraphNode interface
cat -n src/types.ts | head -100
```

Repository: rohitg00/agentmemory
Length of output: 2677

🏁 Script executed:

```shell
# Search for all places where memoryIds or memoryTypes are read from properties
rg -t ts "properties\.memory(Ids|Types)" -B 2 -A 2
```

Repository: rohitg00/agentmemory
Length of output: 614

🏁 Script executed:

```shell
# Check if there's any code that reads memNode.properties.memoryIds
rg -t ts "memNode\.properties|\.properties\.memory" -B 3 -A 3
```

Repository: rohitg00/agentmemory
Length of output: 718

🏁 Script executed:

```shell
# Get the complete GraphNode interface definition
rg -t ts "interface GraphNode" -A 20
```

Repository: rohitg00/agentmemory
Length of output: 688

🏁 Script executed:

```shell
# Also check GraphEdge for context
rg -t ts "interface GraphEdge|interface Graph" -A 15
```

Repository: rohitg00/agentmemory
Length of output: 1549

🏁 Script executed:

```shell
# Search for any place where the graph data is read back (deserialization)
rg -t ts "kv\.get.*GraphNode|kv\.list.*GraphNode" -B 2 -A 5
```

Repository: rohitg00/agentmemory
Length of output: 5183

🏁 Script executed:

```shell
# Check if memoryIds or memoryTypes are ever read from memNode.properties after being set
rg -t ts "memNode" -A 50 src/viewer/server.ts | grep -A 50 "memoryIds\|memoryTypes"
```

Repository: rohitg00/agentmemory
Length of output: 2011

🏁 Script executed:

```shell
# Check if there's any code that reads back the graph nodes and accesses these properties
rg -t ts "\.properties\.memory"
```

Repository: rohitg00/agentmemory
Length of output: 367

🏁 Script executed:

```shell
# Look at the buildGraphFromData function more completely to see if these are read back
cat -n src/viewer/server.ts | sed -n '180,320p'
```

Repository: rohitg00/agentmemory
Length of output: 5211

🏁 Script executed:

```shell
# Search for any code that reads memoryIds or memoryTypes from stored graph nodes
rg -t ts "memoryIds|memoryTypes" -B 5 -A 5 | grep -v "src/viewer/server.ts"
```

Repository: rohitg00/agentmemory
Length of output: 7448

🏁 Script executed:

```shell
# Check the entire function to see the full context of the stored nodes
cat -n src/viewer/server.ts | sed -n '93,130p'
```

Repository: rohitg00/agentmemory
Length of output: 1311

🏁 Script executed:

```shell
# Check how data is serialized when stored in KV
cat -n src/viewer/server.ts | sed -n '282,290p'
```

Repository: rohitg00/agentmemory
Length of output: 382

🏁 Script executed:

```shell
# Check if the KV store does any automatic serialization
rg -t ts "kv\.set|kv\.get" -B 2 -A 2 src/viewer/server.ts | head -40
```

Repository: rohitg00/agentmemory
Length of output: 496

🏁 Script executed:

```shell
# Search for any deserialization or usage of these stored properties in tests or elsewhere
rg -t ts "memNode|\.properties" -B 3 -A 3 | grep -C 5 "memoryIds\|memoryTypes"
```

Repository: rohitg00/agentmemory
Length of output: 807
Type mismatch: assigning arrays to Record<string, string> properties.
GraphNode.properties is typed as Record<string, string>, but this code assigns string[] values to memoryIds and memoryTypes. While this works at runtime, it violates the type contract and could cause issues if downstream code expects string values.
Consider either updating the GraphNode interface to allow array values, or serializing the arrays:
Option 1: Serialize to JSON strings
```diff
- const ids = (memNode.properties.memoryIds as string[]) || [];
- ids.push(mem.id);
- memNode.properties.memoryIds = ids;
- const types = (memNode.properties.memoryTypes as string[]) || [];
- if (!types.includes(mem.type)) types.push(mem.type);
- memNode.properties.memoryTypes = types;
+ const ids: string[] = memNode.properties.memoryIds
+   ? JSON.parse(memNode.properties.memoryIds)
+   : [];
+ ids.push(mem.id);
+ memNode.properties.memoryIds = JSON.stringify(ids);
+ const types: string[] = memNode.properties.memoryTypes
+   ? JSON.parse(memNode.properties.memoryTypes)
+   : [];
+ if (!types.includes(mem.type)) types.push(mem.type);
+ memNode.properties.memoryTypes = JSON.stringify(types);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/viewer/server.ts` around lines 211 - 216, The code assigns string[] to
memNode.properties.memoryIds and memoryTypes but GraphNode.properties is typed
as Record<string, string>; either change the GraphNode interface to accept
arrays (e.g., update GraphNode.properties to Record<string, string | string[]>
or a more specific type that allows memoryIds: string[] and memoryTypes:
string[]) or serialize the arrays when assigning (e.g., set memoryIds =
JSON.stringify(ids) and memoryTypes = JSON.stringify(types)); update all usages
of memNode.properties.memoryIds and memoryTypes accordingly to parse JSON when
needed or adjust types so downstream code compiles.
Summary
- Fixed stream event propagation: the engine uses exact group_id matching, so a dedicated viewer broadcast group was added
- Standalone viewer HTTP server (src/viewer/server.ts) on port 3113
- GET /agentmemory/memories endpoint with ?latest=true filter

Technical details

- stream::set with group_id: "viewer" broadcasts to all viewer subscribers (fire-and-forget via triggerVoid)
- Viewer subscribes with streamName: "mem-live", groupId: "viewer" and handles create/update event types
- iii-config.yaml adds StreamModule (port 3112), PubSubModule, OtelModule, ExecModule

Test plan

- iii --config iii-config.yaml
- http://localhost:3113/agentmemory/viewer

Summary by CodeRabbit
New Features
Bug Fixes
Chores