feat: viewer UI redesign with real-time streaming #10

Merged
rohitg00 merged 3 commits into main from feat/viewer-ui-live-streaming
Mar 4, 2026
Conversation

Owner

@rohitg00 rohitg00 commented Mar 3, 2026

Summary

  • Redesigned agentmemory viewer with newsprint-inspired design system (Playfair Display/Lora/Inter typography, CSS variables, responsive layout)
  • Added real-time live streaming via iii-engine StreamModule — viewer receives observation events as they happen
  • Fixed stream event propagation: engine uses exact group_id matching, so added a dedicated viewer broadcast group
  • New tabs: Activity (GitHub-style heatmap + feed), enhanced Graph (curved edges, shapes, legend, search), visual Timeline, improved Dashboard (semantic/procedural memory sections)
  • Added standalone viewer HTTP server (src/viewer/server.ts) on port 3113
  • New GET /agentmemory/memories endpoint with ?latest=true filter

Technical details

  • stream::set with group_id: "viewer" broadcasts to all viewer subscribers (fire-and-forget via triggerVoid)
  • Viewer WS subscribes to streamName: "mem-live", groupId: "viewer" and handles create/update event types
  • iii-config.yaml adds StreamModule (port 3112), PubSubModule, OtelModule, ExecModule
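The group-matching fix can be sketched as follows. This is illustration only: the STREAM constant and payload field names are assumptions, not the actual schema in src/state/schema.ts.

```typescript
// Hypothetical shapes, for illustration; the real STREAM schema lives in
// src/state/schema.ts.
const STREAM = { name: "mem-live", viewerGroup: "viewer" } as const;

interface Observation {
  id: string;
  sessionId: string;
  content: string;
}

// Because the engine matches group_id exactly, one event must be emitted per
// group: once for the session's own subscribers, and once for the shared
// viewer group (whose payload carries the sessionId so the viewer can
// attribute the event to a session).
function buildStreamPayloads(obs: Observation) {
  return [
    { streamName: STREAM.name, groupId: obs.sessionId, id: obs.id, data: obs },
    {
      streamName: STREAM.name,
      groupId: STREAM.viewerGroup,
      id: obs.id,
      data: { ...obs },
    },
  ];
}
```

Each payload would then go through sdk.triggerVoid("stream::set", payload), which fires without awaiting delivery, so a slow subscriber cannot block the observe path.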

Test plan

  • Start engine: iii --config iii-config.yaml
  • Open viewer at http://localhost:3113/agentmemory/viewer
  • Verify WS status shows "live" (not "live updates off")
  • Create an observation via REST API and verify it appears in real-time
  • Check all tabs: Dashboard, Graph, Activity, Timeline, Sessions, Memories, Audit, Profile

Summary by CodeRabbit

  • New Features

    • Added a dedicated viewer server with web UI and rich /agentmemory/* APIs for sessions, memories, graphs, profiles, summaries, and governance.
    • New REST endpoint to list memories.
    • Real-time streaming now emits updates for both per-session and viewer groups; viewer server starts/stops with the app.
  • Bug Fixes

    • Viewer asset resolution now tries multiple paths and returns clearer not-found responses.
  • Chores

    • Added configuration for viewer, telemetry, queuing, cron, streams, and tracing.

Redesigned the agentmemory viewer with a newsprint-inspired design system
and added real-time live updates via iii-engine StreamModule.

Viewer UI:
- Newsprint design with Playfair Display / Lora / Inter typography
- Dashboard with semantic memory, procedural memory, consolidation status
- Knowledge graph with curved edges, node shapes, legend, search, tooltips
- Visual vertical timeline with alternating cards and type filter chips
- Activity tab with GitHub-style heatmap, type breakdown, activity feed
- Profile, Sessions, Audit, Memories tabs

Real-time streaming:
- Fixed stream event propagation (engine uses exact group_id matching)
- Added viewer broadcast group for cross-session live updates
- Switched from blocking trigger to fire-and-forget triggerVoid
- Viewer subscribes to 'viewer' group, handles create/update events
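On the receiving side, the create/update handling can be sketched as an upsert keyed by event id (the event shape here is an assumption for illustration, not the actual WS message format):

```typescript
// Hypothetical event shape; the real viewer parses whatever the StreamModule
// WebSocket delivers for streamName "mem-live", groupId "viewer".
type LiveEvent = { type: "create" | "update"; id: string; data: unknown };

// Both create and update are upserts keyed by id; unrecognized event types
// are ignored so future event kinds don't break the viewer.
function applyLiveEvent(store: Map<string, unknown>, ev: LiveEvent): void {
  if (ev.type === "create" || ev.type === "update") {
    store.set(ev.id, ev.data);
  }
}
```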

New files:
- iii-config.yaml: engine config with StreamModule, PubSub, Otel
- src/viewer/server.ts: standalone viewer HTTP server on port 3113

API additions:
- GET /agentmemory/memories endpoint with ?latest=true filter
- Viewer file resolution with multiple candidate paths
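A minimal sketch of the ?latest=true filter, assuming each stored record carries a boolean isLatest flag (the record shape is illustrative):

```typescript
interface MemoryRecord {
  id: string;
  isLatest?: boolean;
}

// GET /agentmemory/memories?latest=true keeps only records flagged as the
// current version; without the flag the full list is returned unchanged.
function filterLatest(memories: MemoryRecord[], latest: boolean): MemoryRecord[] {
  return latest ? memories.filter((m) => m.isLatest === true) : memories;
}
```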

coderabbitai Bot commented Mar 3, 2026

📝 Walkthrough


Adds a standalone viewer HTTP server with many /agentmemory/* APIs, broadcasts observations to a new viewer stream group, extends STREAM schema with viewerGroup, integrates viewer startup/shutdown, and provides runtime configuration in iii-config.yaml.

Changes

  • Configuration & Infrastructure (iii-config.yaml): Adds runtime/module configuration: file-based StateModule, RestApiModule (127.0.0.1:3111, CORS), QueueModule, PubSubModule, KvCronAdapter, StreamModule (127.0.0.1:3112), OtelModule (memory exporter), ExecModule (watch/exec).
  • Viewer Server (src/viewer/server.ts): New startViewerServer(...) implementing a dedicated viewer HTTP server with many /agentmemory/* endpoints (sessions, memories, observations, graph, profile, audit, search, summarize, governance, metrics), auth helpers, KV-backed graph/profile builders, SDK fallbacks, and lifecycle management.
  • Application Init / Shutdown (src/index.ts): Imports and starts the viewer server on restPort + 2; increases the readiness endpoint count; ensures viewerServer.close() is awaited during shutdown.
  • Stream Broadcasting (src/functions/compress.ts, src/functions/observe.ts): Switches stream triggers to sdk.triggerVoid("stream::set", ...) and emits observations to both the session group and the new STREAM.viewerGroup (sessionId included in the viewer payload).
  • Stream Schema & API Endpoints (src/state/schema.ts, src/triggers/api.ts): Adds viewerGroup: "viewer" to STREAM; new API api::memories (GET /agentmemory/memories with optional latest filter); viewer static-file resolution improved to try multiple candidate paths and return 404 when not found.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant SDK
    participant ViewerServer as Viewer Server
    participant KV as State KV
    participant Metrics as Metrics Store

    Client->>SDK: trigger stream::set (session group)
    activate SDK
    SDK->>KV: persist observation
    KV-->>SDK: ack
    SDK->>SDK: triggerVoid stream::set (viewer group)
    SDK-->>ViewerServer: (broadcast) observation event
    deactivate SDK

    ViewerServer->>KV: update graph/profile derived data
    KV-->>ViewerServer: updated aggregates
    ViewerServer->>Metrics: optional metrics read/write
    Metrics-->>ViewerServer: metrics response

    Client->>ViewerServer: GET /agentmemory/graph/stats
    activate ViewerServer
    ViewerServer->>KV: read graph nodes/edges
    KV-->>ViewerServer: graph data
    ViewerServer-->>Client: 200 Graph stats
    deactivate ViewerServer

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐇 I nibbled code and found a door,
A viewer server, streams galore.
Memories hop from KV beds,
Graphs and profiles in my head.
Come peek inside — the rabbit said, "Explore!"

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: docstring coverage is 0.00%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Description Check ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title check ✅ Passed: the title 'feat: viewer UI redesign with real-time streaming' accurately captures the main changes: viewer UI redesign and real-time streaming integration via StreamModule with WebSocket support.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
  • 📝 Generate docstrings (stacked PR)
  • 📝 Generate docstrings (commit on current branch)
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch feat/viewer-ui-live-streaming


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 5

🧹 Nitpick comments (1)
src/index.ts (1)

228-240: Close the viewer server in shutdown for graceful termination.

startViewerServer(...) return value is ignored, so active viewer connections are not drained before process.exit.

♻️ Suggested refactor
   const viewerPort = config.restPort + 2;
-  startViewerServer(viewerPort, kv, sdk, secret, metricsStore, provider);
+  const viewerServer = startViewerServer(
+    viewerPort,
+    kv,
+    sdk,
+    secret,
+    metricsStore,
+    provider,
+  );

   const shutdown = async () => {
     console.log(`\n[agentmemory] Shutting down...`);
+    await new Promise<void>((resolve) => viewerServer.close(() => resolve()));
     healthMonitor.stop();
     dedupMap.stop();
     indexPersistence.stop();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/index.ts` around lines 228 - 240, The viewer server started by
startViewerServer(viewerPort, kv, sdk, secret, metricsStore, provider) is never
captured or closed, so active connections aren't drained; update the code to
store the returned server instance (e.g., viewerServer) from startViewerServer
and in the shutdown async function call its graceful shutdown method (e.g.,
await viewerServer.close() or await viewerServer.stop()) before calling
process.exit(0), ensuring you handle/rethrow or log errors from that close call
consistently with other shutdown steps.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@iii-config.yaml`:
- Around line 15-17: Replace the permissive wildcard in cors.allowed_origins
with an explicit allowlist of trusted origins (e.g., local dev and production
viewer origins) and load them from a config/env variable rather than using
["*"]; update iii-config.yaml's cors.allowed_origins to reference that allowlist
(keeping cors.allowed_methods as-is) so only approved origins can call mutating
endpoints.

In `@src/triggers/api.ts`:
- Around line 1009-1013: The current return object for the missing viewer case
uses status_code: 200 which incorrectly signals success; change the returned
status_code to 404 in the block that returns { status_code: 200, headers, body:
"<!DOCTYPE html>...viewer not found..." } so the response correctly indicates
"Not Found" while leaving headers and body content intact (or adjust body to a
short 404 message if desired).

In `@src/viewer/server.ts`:
- Around line 370-390: The route handling in the HTTP server only matches
pathname "/" and "/viewer" so requests to "/agentmemory/viewer" fall through;
update the request-path check in the handler that uses variables method,
pathname and candidates (the block that reads files with readFileSync and
responds via res.writeHead/res.end) to also accept "/agentmemory/viewer" (or use
a startsWith check if you want a prefix match) so the same index.html candidates
are served for that path before falling through to API routing.
- Around line 693-707: In the "session/end" request handler, validate
body.sessionId before calling kv.get/kv.set and before returning json(res, 200):
ensure body.sessionId exists and is a string (e.g., typeof body.sessionId ===
"string" and not empty); if validation fails, return an appropriate error
response (e.g., json(res, 400, { success: false, error: "invalid sessionId" }))
so the code in the try block that uses KV.sessions, kv.get, and kv.set only runs
for a valid sessionId and you don't incorrectly return success for invalid
input.
- Around line 247-253: The graph rebuild currently writes new node/edge entries
but never removes stale ones; modify the block around the nodes/edges write
(where nodeMap, nodes, edges and kv.set are used for KV.graphNodes and
KV.graphEdges) to first remove existing keys for those namespaces before writing
the new set — either call a provided namespace clear method (e.g.,
kv.clear(KV.graphNodes)/kv.clear(KV.graphEdges)) or list keys in
KV.graphNodes/KV.graphEdges and delete each with kv.delete(key), then write the
new nodes and edges as currently implemented.

---

Nitpick comments:
In `@src/index.ts`:
- Around line 228-240: The viewer server started by
startViewerServer(viewerPort, kv, sdk, secret, metricsStore, provider) is never
captured or closed, so active connections aren't drained; update the code to
store the returned server instance (e.g., viewerServer) from startViewerServer
and in the shutdown async function call its graceful shutdown method (e.g.,
await viewerServer.close() or await viewerServer.stop()) before calling
process.exit(0), ensuring you handle/rethrow or log errors from that close call
consistently with other shutdown steps.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7099056 and 8cd0c66.

📒 Files selected for processing (8)
  • iii-config.yaml
  • src/functions/compress.ts
  • src/functions/observe.ts
  • src/index.ts
  • src/state/schema.ts
  • src/triggers/api.ts
  • src/viewer/index.html
  • src/viewer/server.ts

Comment thread iii-config.yaml
Comment on lines +15 to +17
cors:
  allowed_origins: ["*"]
  allowed_methods: [GET, POST, PUT, DELETE, OPTIONS]

⚠️ Potential issue | 🟠 Major

Avoid wildcard CORS on the API module.

allowed_origins: ["*"] is too permissive for endpoints that mutate state; prefer explicit local viewer origins.

🔒 Suggested hardening
       cors:
-        allowed_origins: ["*"]
+        allowed_origins:
+          - "http://localhost:3113"
+          - "http://127.0.0.1:3113"
         allowed_methods: [GET, POST, PUT, DELETE, OPTIONS]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@iii-config.yaml` around lines 15 - 17, Replace the permissive wildcard in
cors.allowed_origins with an explicit allowlist of trusted origins (e.g., local
dev and production viewer origins) and load them from a config/env variable
rather than using ["*"]; update iii-config.yaml's cors.allowed_origins to
reference that allowlist (keeping cors.allowed_methods as-is) so only approved
origins can call mutating endpoints.

Comment thread src/triggers/api.ts
Comment thread src/viewer/server.ts
Comment thread src/viewer/server.ts Outdated
Comment thread src/viewer/server.ts
- Restrict CORS allowed_origins to local dev origins instead of wildcard
- Return 404 instead of 200 when viewer HTML file is not found
- Add /agentmemory/viewer path to standalone viewer server routing
- Validate sessionId in session/end handler before KV operations
- Clear stale graph nodes/edges before writing rebuilt data
- Capture viewer server and close gracefully on shutdown

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
src/triggers/api.ts (1)

980-983: Consider pagination for GET /agentmemory/memories.

Line 980 currently loads and returns the full KV.memories scope, which can grow unbounded and degrade latency/payload size. Add limit/offset (or cursor) support and apply latest filtering after bounded fetch.
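A sketch of the suggested bounding, assuming limit/offset query params and paging the fetched array as a fallback (whether kv.list itself accepts paging options is unverified here; the record shape is illustrative):

```typescript
interface MemoryRecord {
  id: string;
  isLatest?: boolean;
}

const MAX_LIMIT = 200; // hard cap so a client cannot request an unbounded page

// Parse and clamp pagination params, apply the latest filter, then page the
// result; nextOffset is null on the final page.
function paginate(
  memories: MemoryRecord[],
  params: URLSearchParams,
): { items: MemoryRecord[]; nextOffset: number | null } {
  const limit = Math.min(
    Math.max(parseInt(params.get("limit") || "50", 10) || 50, 1),
    MAX_LIMIT,
  );
  const offset = Math.max(parseInt(params.get("offset") || "0", 10) || 0, 0);
  const latest = params.get("latest") === "true";
  const filtered = latest ? memories.filter((m) => m.isLatest) : memories;
  const items = filtered.slice(offset, offset + limit);
  const nextOffset = offset + limit < filtered.length ? offset + limit : null;
  return { items, nextOffset };
}
```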

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/triggers/api.ts` around lines 980 - 983, The handler for GET
/agentmemory/memories currently calls kv.list(KV.memories) and returns
everything; change it to accept pagination parameters (e.g., query params
"limit" and "offset" or "cursor") from req.query_params, parse and validate them
(apply a sensible max cap), call kv.list(KV.memories, { limit, cursor/offset })
to fetch a bounded page, then apply the existing latest filtering (the latest ?
memories.filter((m) => m.isLatest) : memories) to the page results and return
the page plus pagination metadata (e.g., nextCursor or total/offset) in the
response; reference kv.list, KV.memories, req.query_params, latest, and filtered
when making the changes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/viewer/server.ts`:
- Around line 29-33: The CORS config (constant CORS in server.ts) is too
permissive—replace the wildcard origin with an explicit allow-list (e.g., read
allowed origin(s) from an env var like VIEWER_ORIGIN or VIEWER_ALLOWED_ORIGINS
and validate the request Origin against that list in your request handler) and
return the matching Origin value in "Access-Control-Allow-Origin" instead of
"*"; also add the "Vary: Origin" header to the response so caches respect the
Origin-specific response. Locate the CORS constant and the request/response
handling around it (server.ts) and implement origin validation + dynamic header
setting; ensure headers still include methods/headers and consider adding
Access-Control-Allow-Credentials if cookies/auth are used.
- Around line 189-190: The current code writes memNode.properties.memoryId =
mem.id and memNode.properties.memoryType = mem.type which blindly overwrites
prior metadata when ensureNode (used to dedupe by type:name) maps multiple
memories to the same node; change these properties to support multiple values
and merge instead of overwrite: use memoryIds (array) and memoryTypes (array or
set) on memNode.properties, push mem.id / mem.type only if not already present,
and update any consumers to read the pluralized fields (memoryIds/memoryTypes)
or maintain backward-compatible single-value fallback handling in ensureNode /
memNode creation paths.

---

Nitpick comments:
In `@src/triggers/api.ts`:
- Around line 980-983: The handler for GET /agentmemory/memories currently calls
kv.list(KV.memories) and returns everything; change it to accept pagination
parameters (e.g., query params "limit" and "offset" or "cursor") from
req.query_params, parse and validate them (apply a sensible max cap), call
kv.list(KV.memories, { limit, cursor/offset }) to fetch a bounded page, then
apply the existing latest filtering (the latest ? memories.filter((m) =>
m.isLatest) : memories) to the page results and return the page plus pagination
metadata (e.g., nextCursor or total/offset) in the response; reference kv.list,
KV.memories, req.query_params, latest, and filtered when making the changes.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8cd0c66 and fee79dd.

📒 Files selected for processing (4)
  • iii-config.yaml
  • src/index.ts
  • src/triggers/api.ts
  • src/viewer/server.ts
🚧 Files skipped from review as they are similar to previous changes (2)
  • src/index.ts
  • iii-config.yaml

Comment thread src/viewer/server.ts Outdated
Comment thread src/viewer/server.ts Outdated
Replace static CORS wildcard with dynamic corsHeaders(req) that validates
the request Origin against an allowlist. Pass req to all json() call sites.
Change memNode.properties from scalar memoryId/memoryType to array-based
memoryIds/memoryTypes to prevent overwrite when multiple memories map to
the same graph node.
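The corsHeaders(req) change described in this commit might look roughly like the sketch below; the specific allowlist entries are assumptions (in practice they would come from configuration).

```typescript
// Allowlist is an assumption for illustration; the real list would be loaded
// from configuration. Echoing the matched Origin (never "*") plus a
// "Vary: Origin" header keeps caches from serving one origin's response to
// another.
const ALLOWED_ORIGINS = new Set([
  "http://localhost:3113",
  "http://127.0.0.1:3113",
]);

function corsHeaders(origin: string | undefined): Record<string, string> {
  const headers: Record<string, string> = {
    Vary: "Origin",
    "Access-Control-Allow-Methods": "GET, POST, PUT, DELETE, OPTIONS",
    "Access-Control-Allow-Headers": "Content-Type, Authorization",
  };
  if (origin && ALLOWED_ORIGINS.has(origin)) {
    headers["Access-Control-Allow-Origin"] = origin;
  }
  return headers;
}
```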

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (4)
src/viewer/server.ts (4)

590-593: Add radix to parseInt for clarity.

While the default behavior for "50" is correct, explicitly specifying radix 10 is a best practice to avoid edge cases with leading zeros.

-        limit: parseInt(params.get("limit") || "50"),
+        limit: parseInt(params.get("limit") || "50", 10),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/viewer/server.ts` around lines 590 - 593, The parseInt call used when
building the sdk.trigger payload should include an explicit radix; update the
parseInt invocation in the code that prepares the "limit" field (the line
invoking parseInt(params.get("limit") || "50")) to pass 10 as the second
argument so it reads parseInt(..., 10), ensuring decimal parsing of
params.get("limit") prior to calling sdk.trigger("mem::audit-query").

787-803: Consider extracting body parsing to reduce duplication.

The body parsing logic (lines 788-795) is nearly identical to the POST handler (lines 687-694). This is a minor duplication that could be extracted if more methods need body parsing.

♻️ Extract parseJsonBody helper

Add before handleApiRoute:

async function parseJsonBody(
  req: IncomingMessage,
  res: ServerResponse,
): Promise<Record<string, unknown> | null> {
  try {
    const raw = await readBody(req);
    return raw.trim() ? JSON.parse(raw) : {};
  } catch {
    json(res, 400, { error: "invalid JSON" }, req);
    return null;
  }
}

Then in handlers:

   if (method === "POST") {
-    let body: Record<string, unknown> = {};
-    try {
-      const raw = await readBody(req);
-      if (raw.trim()) body = JSON.parse(raw);
-    } catch {
-      json(res, 400, { error: "invalid JSON" }, req);
-      return;
-    }
+    const body = await parseJsonBody(req, res);
+    if (body === null) return;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/viewer/server.ts` around lines 787 - 803, Extract the duplicated JSON
parsing into a helper (e.g., parseJsonBody) used by handlers in handleApiRoute:
move the try/readBody/JSON.parse + error-response (json(res, 400, { error:
"invalid JSON" }, req)) into parseJsonBody(req, res) which returns the parsed
Record<string, unknown> or null on error; replace the inline parsing blocks in
the DELETE branch (and the existing POST branch) to call parseJsonBody and
early-return when it returns null before calling
sdk.trigger("mem::governance-delete", body) so behavior and error responses
remain identical.

273-288: Sequential awaits in loops could be parallelized.

The delete and write operations are executed sequentially, which can be slow with many nodes/edges. Consider using Promise.all for better throughput:

♻️ Suggested parallelization
   const oldNodes = await kv.list<GraphNode>(KV.graphNodes).catch(() => []);
-  for (const old of oldNodes) {
-    await kv.delete(KV.graphNodes, old.id);
-  }
+  await Promise.all(oldNodes.map((old) => kv.delete(KV.graphNodes, old.id)));
   const oldEdges = await kv.list<GraphEdge>(KV.graphEdges).catch(() => []);
-  for (const old of oldEdges) {
-    await kv.delete(KV.graphEdges, old.id);
-  }
+  await Promise.all(oldEdges.map((old) => kv.delete(KV.graphEdges, old.id)));

   const nodes = [...nodeMap.values()];
-  for (const n of nodes) {
-    await kv.set(KV.graphNodes, n.id, n);
-  }
-  for (const e of edges) {
-    await kv.set(KV.graphEdges, e.id, e);
-  }
+  await Promise.all(nodes.map((n) => kv.set(KV.graphNodes, n.id, n)));
+  await Promise.all(edges.map((e) => kv.set(KV.graphEdges, e.id, e)));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/viewer/server.ts` around lines 273 - 288, The current sequence iterates
and awaits each kv.delete/kv.set one-by-one (using oldNodes/oldEdges and
nodes/edges), causing slow serial IO; change each loop to collect promises
(e.g., map oldNodes -> kv.delete(KV.graphNodes, old.id) and oldEdges ->
kv.delete(...), and map nodes/edges -> kv.set(...)) and then await Promise.all
on each promise array so deletes and writes run in parallel while preserving the
same remove-then-write ordering; update the blocks around kv.list, the for loops
over oldNodes/oldEdges, and the subsequent writes using Promise.all to improve
throughput.

234-271: Quadratic complexity in function-file relationship inference.

This nested loop iterates over all function nodes, then for each checks edges against all file nodes, then iterates over all other function nodes checking edges again. With large graphs this becomes O(funcNodes² × fileNodes × edges).

Consider building a lookup map for faster edge queries:

♻️ Suggested optimization using edge index
+  // Build edge index for O(1) lookups
+  const nodeEdges = new Map<string, Set<string>>();
+  for (const e of edges) {
+    if (!nodeEdges.has(e.sourceNodeId)) nodeEdges.set(e.sourceNodeId, new Set());
+    if (!nodeEdges.has(e.targetNodeId)) nodeEdges.set(e.targetNodeId, new Set());
+    nodeEdges.get(e.sourceNodeId)!.add(e.targetNodeId);
+    nodeEdges.get(e.targetNodeId)!.add(e.sourceNodeId);
+  }
+
   const fileNodes = [...nodeMap.values()].filter((n) => n.type === "file");
   const funcNodes = [...nodeMap.values()].filter((n) => n.type === "function");
+  const addedPairs = new Set<string>();
+
   for (const fn of funcNodes) {
+    const fnConnections = nodeEdges.get(fn.id) || new Set();
     for (const file of fileNodes) {
-      const hasEdge = edges.some(
-        (e) =>
-          (e.sourceNodeId === fn.id && e.targetNodeId === file.id) ||
-          (e.sourceNodeId === file.id && e.targetNodeId === fn.id),
-      );
-      if (!hasEdge) continue;
+      if (!fnConnections.has(file.id)) continue;
       for (const fn2 of funcNodes) {
         if (fn2.id === fn.id) continue;
-        const alsoTouches = edges.some(
-          (e) =>
-            (e.sourceNodeId === fn2.id && e.targetNodeId === file.id) ||
-            (e.sourceNodeId === file.id && e.targetNodeId === fn2.id),
-        );
+        const fn2Connections = nodeEdges.get(fn2.id) || new Set();
+        const alsoTouches = fn2Connections.has(file.id);
         if (alsoTouches) {
-          const exists = edges.some(
-            (e) =>
-              (e.sourceNodeId === fn.id && e.targetNodeId === fn2.id) ||
-              (e.sourceNodeId === fn2.id && e.targetNodeId === fn.id),
-          );
-          if (!exists) {
+          const pairKey = [fn.id, fn2.id].sort().join(":");
+          if (!addedPairs.has(pairKey) && !fnConnections.has(fn2.id)) {
+            addedPairs.add(pairKey);
             edges.push({
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/viewer/server.ts` around lines 234 - 271, The current triple-nested loops
over funcNodes, fileNodes and funcNodes cause O(func² × file × edges)
complexity; instead build an index from edges in a single pass (e.g., map fileId
-> Set of functionIds and a Set of existing pair keys for quick existence
checks) then iterate each file's function set and produce related pairs once;
use the same edge shape (id via gid("ge"), type "related_to", weight 0.5,
sourceObservationIds: [], createdAt: now) and check the existing-pair set before
pushing to edges to avoid duplicates (references: fileNodes, funcNodes, edges,
gid, createdAt, "related_to").
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/viewer/server.ts`:
- Around line 211-216: The code assigns string[] to memNode.properties.memoryIds
and memoryTypes but GraphNode.properties is typed as Record<string, string>;
either change the GraphNode interface to accept arrays (e.g., update
GraphNode.properties to Record<string, string | string[]> or a more specific
type that allows memoryIds: string[] and memoryTypes: string[]) or serialize the
arrays when assigning (e.g., set memoryIds = JSON.stringify(ids) and memoryTypes
= JSON.stringify(types)); update all usages of memNode.properties.memoryIds and
memoryTypes accordingly to parse JSON when needed or adjust types so downstream
code compiles.

---

Nitpick comments:
In `@src/viewer/server.ts`:
- Around line 590-593: The parseInt call used when building the sdk.trigger
payload should include an explicit radix; update the parseInt invocation in the
code that prepares the "limit" field (the line invoking
parseInt(params.get("limit") || "50")) to pass 10 as the second argument so it
reads parseInt(..., 10), ensuring decimal parsing of params.get("limit") prior
to calling sdk.trigger("mem::audit-query").
- Around line 787-803: Extract the duplicated JSON parsing into a helper (e.g.,
parseJsonBody) used by handlers in handleApiRoute: move the
try/readBody/JSON.parse + error-response (json(res, 400, { error: "invalid JSON"
}, req)) into parseJsonBody(req, res) which returns the parsed Record<string,
unknown> or null on error; replace the inline parsing blocks in the DELETE
branch (and the existing POST branch) to call parseJsonBody and early-return
when it returns null before calling sdk.trigger("mem::governance-delete", body)
so behavior and error responses remain identical.
- Around line 273-288: The current sequence iterates and awaits each
kv.delete/kv.set one-by-one (using oldNodes/oldEdges and nodes/edges), causing
slow serial IO; change each loop to collect promises (e.g., map oldNodes ->
kv.delete(KV.graphNodes, old.id) and oldEdges -> kv.delete(...), and map
nodes/edges -> kv.set(...)) and then await Promise.all on each promise array so
deletes and writes run in parallel while preserving the same remove-then-write
ordering; update the blocks around kv.list, the for loops over
oldNodes/oldEdges, and the subsequent writes using Promise.all to improve
throughput.
- Around line 234-271: The current triple-nested loops over funcNodes, fileNodes
and funcNodes cause O(func² × file × edges) complexity; instead build an index
from edges in a single pass (e.g., map fileId -> Set of functionIds and a Set of
existing pair keys for quick existence checks) then iterate each file's function
set and produce related pairs once; use the same edge shape (id via gid("ge"),
type "related_to", weight 0.5, sourceObservationIds: [], createdAt: now) and
check the existing-pair set before pushing to edges to avoid duplicates
(references: fileNodes, funcNodes, edges, gid, createdAt, "related_to").

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fee79dd and 710f3f1.

📒 Files selected for processing (1)
  • src/viewer/server.ts

Comment thread src/viewer/server.ts
Comment on lines +211 to +216
const ids = (memNode.properties.memoryIds as string[]) || [];
ids.push(mem.id);
memNode.properties.memoryIds = ids;
const types = (memNode.properties.memoryTypes as string[]) || [];
if (!types.includes(mem.type)) types.push(mem.type);
memNode.properties.memoryTypes = types;

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# Find the GraphNode type definition
fd -t f "\.ts$|\.tsx$" | head -20

Repository: rohitg00/agentmemory

Length of output: 573


🏁 Script executed:

# Search for GraphNode definition
rg -t ts "interface GraphNode|type GraphNode" -A 5

Repository: rohitg00/agentmemory

Length of output: 238


🏁 Script executed:

# Look at the specific file and lines in context
cat -n src/viewer/server.ts | sed -n '200,280p'

Repository: rohitg00/agentmemory

Length of output: 3223


🏁 Script executed:

# Search for usages of memoryIds and memoryTypes
rg -t ts "memoryIds|memoryTypes" -B 2 -A 2

Repository: rohitg00/agentmemory

Length of output: 5378


🏁 Script executed:

# Get the complete GraphNode interface
cat -n src/types.ts | head -100

Repository: rohitg00/agentmemory

Length of output: 2677


🏁 Script executed:

# Search for all places where memoryIds or memoryTypes are read from properties
rg -t ts "properties\.memory(Ids|Types)" -B 2 -A 2


Length of output: 614


🏁 Script executed:

# Check if there's any code that reads memNode.properties.memoryIds
rg -t ts "memNode\.properties|\.properties\.memory" -B 3 -A 3


Length of output: 718


🏁 Script executed:

# Get the complete GraphNode interface definition
rg -t ts "interface GraphNode" -A 20


Length of output: 688


🏁 Script executed:

# Also check GraphEdge for context
rg -t ts "interface GraphEdge|interface Graph" -A 15


Length of output: 1549


🏁 Script executed:

# Search for any place where the graph data is read back (deserialization)
rg -t ts "kv\.get.*GraphNode|kv\.list.*GraphNode" -B 2 -A 5


Length of output: 5183


🏁 Script executed:

# Check if memoryIds or memoryTypes are ever read from memNode.properties after being set
rg -t ts "memNode" -A 50 src/viewer/server.ts | grep -A 50 "memoryIds\|memoryTypes"


Length of output: 2011


🏁 Script executed:

# Check if there's any code that reads back the graph nodes and accesses these properties
rg -t ts "\.properties\.memory"


Length of output: 367


🏁 Script executed:

# Look at the buildGraphFromData function more completely to see if these are read back
cat -n src/viewer/server.ts | sed -n '180,320p'


Length of output: 5211


🏁 Script executed:

# Search for any code that reads memoryIds or memoryTypes from stored graph nodes
rg -t ts "memoryIds|memoryTypes" -B 5 -A 5 | grep -v "src/viewer/server.ts"


Length of output: 7448


🏁 Script executed:

# Check the entire function to see the full context of the stored nodes
cat -n src/viewer/server.ts | sed -n '93,130p'


Length of output: 1311


🏁 Script executed:

# Check how data is serialized when stored in KV
cat -n src/viewer/server.ts | sed -n '282,290p'


Length of output: 382


🏁 Script executed:

# Check if the KV store does any automatic serialization
rg -t ts "kv\.set|kv\.get" -B 2 -A 2 src/viewer/server.ts | head -40


Length of output: 496


🏁 Script executed:

# Search for any deserialization or usage of these stored properties in tests or elsewhere
rg -t ts "memNode|\.properties" -B 3 -A 3 | grep -C 5 "memoryIds\|memoryTypes"


Length of output: 807


Type mismatch: assigning arrays to Record<string, string> properties.

GraphNode.properties is typed as Record<string, string>, but this code assigns string[] values to memoryIds and memoryTypes. While this works at runtime, it violates the type contract and could cause issues if downstream code expects string values.

Consider either updating the GraphNode interface to allow array values, or serializing the arrays:

Option 1: Serialize to JSON strings
-    const ids = (memNode.properties.memoryIds as string[]) || [];
-    ids.push(mem.id);
-    memNode.properties.memoryIds = ids;
-    const types = (memNode.properties.memoryTypes as string[]) || [];
-    if (!types.includes(mem.type)) types.push(mem.type);
-    memNode.properties.memoryTypes = types;
+    const ids: string[] = memNode.properties.memoryIds
+      ? JSON.parse(memNode.properties.memoryIds)
+      : [];
+    ids.push(mem.id);
+    memNode.properties.memoryIds = JSON.stringify(ids);
+    const types: string[] = memNode.properties.memoryTypes
+      ? JSON.parse(memNode.properties.memoryTypes)
+      : [];
+    if (!types.includes(mem.type)) types.push(mem.type);
+    memNode.properties.memoryTypes = JSON.stringify(types);
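The other branch of the suggestion, widening the interface, might look like the sketch below. All `GraphNode` fields besides `properties` are assumptions for illustration, not the repository's actual definition:

```typescript
// Sketch of "Option 2": widen GraphNode.properties so array values are
// legal at the type level.
interface GraphNode {
  id: string;
  type: string;
  properties: Record<string, string | string[]>;
}

const memNode: GraphNode = { id: "mem-root", type: "memory", properties: {} };

// The flagged assignment now type-checks without casts; reads narrow the
// union with Array.isArray instead of asserting.
const raw = memNode.properties.memoryIds;
const ids = Array.isArray(raw) ? raw : [];
ids.push("mem-1");
memNode.properties.memoryIds = ids;

console.log(memNode.properties.memoryIds); // ["mem-1"]
```

The trade-off is that every existing reader of `properties` must now handle the `string | string[]` union, which is why the JSON-string route keeps the storage contract simpler.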
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/viewer/server.ts` around lines 211-216, the code assigns string[] to
memNode.properties.memoryIds and memoryTypes, but GraphNode.properties is typed
as Record<string, string>. Either change the GraphNode interface to accept
arrays (e.g., update GraphNode.properties to Record<string, string | string[]>,
or a more specific type that allows memoryIds: string[] and memoryTypes:
string[]), or serialize the arrays when assigning (e.g., set memoryIds =
JSON.stringify(ids) and memoryTypes = JSON.stringify(types)). Update all usages
of memNode.properties.memoryIds and memoryTypes accordingly: parse JSON when
needed, or adjust the types so downstream code compiles.
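If the JSON-string route is taken, every reader of the stored node needs a matching parse step. A defensive sketch of that read side (the helper name is hypothetical, not part of the codebase):

```typescript
// Hypothetical helper: parse a JSON-encoded string[] property back out of
// a Record<string, string>, tolerating missing or malformed values.
function readStringArray(
  properties: Record<string, string>,
  key: string
): string[] {
  const raw = properties[key];
  if (!raw) return [];
  try {
    const parsed = JSON.parse(raw);
    // Keep only string entries so a corrupted value can't leak other types.
    return Array.isArray(parsed)
      ? parsed.filter((v): v is string => typeof v === "string")
      : [];
  } catch {
    return []; // malformed JSON degrades to an empty list
  }
}

const props = { memoryIds: JSON.stringify(["m1", "m2"]) };
console.log(readStringArray(props, "memoryIds")); // ["m1", "m2"]
console.log(readStringArray(props, "memoryTypes")); // []
```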

@rohitg00 rohitg00 merged commit e2d8e70 into main Mar 4, 2026
1 check passed
@rohitg00 rohitg00 deleted the feat/viewer-ui-live-streaming branch March 4, 2026 04:39