Move conversation persistence into workspace memory store #503

senamakel merged 4 commits into tinyhumansai:main
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (3)
✅ Files skipped from review due to trivial changes (1)
🚧 Files skipped from review as they are similar to previous changes (2)
📝 Walkthrough

Replaces frontend thread/message flows with RPC-backed memory calls, removes persisted Redux optimistic send flows, adds a workspace-backed JSONL conversation store and RPC handlers, and registers an event-bus subscriber that persists channel turns into the workspace store.
Sequence Diagram(s): mermaid

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
🚥 Pre-merge checks: ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
Actionable comments posted: 3
🧹 Nitpick comments (3)
src/openhuman/memory/conversations/store.rs (2)
18-18: Global mutex serializes all workspace operations.
`CONVERSATION_STORE_LOCK` is a single global mutex shared across all `ConversationStore` instances. If the application ever operates on multiple workspaces concurrently, this could become a contention point. For now this is likely acceptable given single-workspace usage, but consider per-workspace locking (e.g., keyed by `workspace_dir`) if multi-workspace support becomes a requirement.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/memory/conversations/store.rs` at line 18, CONVERSATION_STORE_LOCK is a single global Mutex that serializes all ConversationStore operations and will become a contention point for multiple workspaces; replace it with a keyed lock map (e.g., a static concurrent map from workspace_dir to a per-workspace Mutex or RwLock) and change ConversationStore access paths to acquire the lock for the specific workspace_dir key rather than the global CONVERSATION_STORE_LOCK; ensure the keyed map entry creation is thread-safe (use DashMap, once_cell + Mutex<HashMap<..>>, or similar) and that locks are cleaned up if needed to avoid unbounded growth.
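The keyed-lock idea suggested above can be sketched with only the standard library (no DashMap needed). This is a minimal sketch, not the project's actual code: `WORKSPACE_LOCKS` and `workspace_lock` are hypothetical names, and the real store would acquire the returned lock around its file I/O.

```rust
use std::collections::HashMap;
use std::path::{Path, PathBuf};
use std::sync::{Arc, Mutex, OnceLock};

// Hypothetical replacement for a single global CONVERSATION_STORE_LOCK:
// one lock per workspace directory, created lazily and shared via Arc.
static WORKSPACE_LOCKS: OnceLock<Mutex<HashMap<PathBuf, Arc<Mutex<()>>>>> = OnceLock::new();

fn workspace_lock(workspace_dir: &Path) -> Arc<Mutex<()>> {
    let map = WORKSPACE_LOCKS.get_or_init(|| Mutex::new(HashMap::new()));
    let mut guard = map.lock().expect("lock map poisoned");
    guard
        .entry(workspace_dir.to_path_buf())
        .or_insert_with(|| Arc::new(Mutex::new(())))
        .clone()
}

fn main() {
    let a = workspace_lock(Path::new("/tmp/ws-a"));
    let b = workspace_lock(Path::new("/tmp/ws-a"));
    let c = workspace_lock(Path::new("/tmp/ws-b"));
    // Same workspace shares one lock; different workspaces do not contend.
    assert!(Arc::ptr_eq(&a, &b));
    assert!(!Arc::ptr_eq(&a, &c));
    let _guard = a.lock().unwrap(); // serializes work for this workspace only
    println!("ok");
}
```

As the comment notes, entries accumulate per workspace; a real implementation might evict entries for closed workspaces to avoid unbounded growth.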
114-143: Consider append-only patching for message updates.
`update_message` currently reads all messages, patches one, and rewrites the entire JSONL file. For threads with many messages, this O(n) rewrite on every reaction toggle could become a performance bottleneck. Consider an append-only patch strategy (e.g., appending `{"op":"patch","message_id":"...","extra_metadata":{...}}` entries) and reconstructing state on read, similar to how `threads.jsonl` handles upsert/delete operations. This would make updates O(1) at the cost of slightly more complex read logic.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/memory/conversations/store.rs` around lines 114 - 143, The current update_message in update_message reads all messages and calls rewrite_jsonl, causing O(n) writes; change it to append-only by writing a patch record (e.g., {"op":"patch","message_id":..., "extra_metadata":...}) to the same path returned by thread_messages_path(thread_id) instead of mutating the whole file, and update the reader logic (either read_jsonl or the thread read path that constructs ConversationMessage state) to replay base messages plus subsequent patch entries to reconstruct the current ConversationMessage state on read; ensure ConversationMessagePatch is serialized in the appended record format and that update_message returns the reconstructed updated ConversationMessage after appending.

app/src/store/threadSlice.ts (1)
47-84: Use arrow helpers for the new cache utilities.

These two module-level helpers are new code, and the repo convention here is `const` + arrow functions.

♻️ Suggested refactor
```diff
-function appendMessageToCache(
+const appendMessageToCache = (
   state: ThreadState,
   threadId: string,
   message: ThreadMessage,
   replaceExisting = false
-) {
+) => {
   const existing = state.messagesByThreadId[threadId] ?? [];
   const nextStored = replaceExisting
     ? existing.map(entry => (entry.id === message.id ? message : entry))
@@
   if (thread) {
     thread.messageCount = nextStored.length;
     thread.lastMessageAt =
       nextStored.length > 0 ? nextStored[nextStored.length - 1].createdAt : thread.createdAt;
   }
-}
+};

-function replaceMessagesForThread(state: ThreadState, threadId: string, messages: ThreadMessage[]) {
+const replaceMessagesForThread = (
+  state: ThreadState,
+  threadId: string,
+  messages: ThreadMessage[]
+) => {
   state.messagesByThreadId[threadId] = messages;
   if (threadId === state.selectedThreadId) {
     state.messages = messages;
@@
   if (thread) {
     thread.messageCount = messages.length;
     thread.lastMessageAt =
       messages.length > 0 ? messages[messages.length - 1].createdAt : thread.createdAt;
   }
-}
+};
```

As per coding guidelines for `**/*.{js,jsx,ts,tsx}`: "Use const by default, let if reassignment is needed, avoid var" and "Prefer arrow functions over function declarations".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/store/threadSlice.ts` around lines 47 - 84, Convert the two function declarations appendMessageToCache and replaceMessagesForThread into const arrow function helpers to follow the repo convention (use const + arrow functions); specifically, replace "function appendMessageToCache(...)" with "const appendMessageToCache = (...) => { ... }" and "function replaceMessagesForThread(...)" with "const replaceMessagesForThread = (...) => { ... }" while preserving all parameter names, logic, and references (state, threadId, message, messages, replaceExisting) so callers and exports remain unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/src/pages/Conversations.tsx`:
- Around line 276-289: The current useEffect uses a .then() chain after
dispatch(createThreadLocal(...)); convert this to async/await by defining an
async IIFE inside useEffect (or an async inner function) and await
dispatch(createThreadLocal({...})) before calling
dispatch(setSelectedThread(DEFAULT_THREAD_ID)) and await
dispatch(loadThreadMessages(DEFAULT_THREAD_ID)); keep the initial void
dispatch(loadThreads()); call as-is or await it if desired, and wrap the awaited
calls in try/catch to handle errors; update references: useEffect, loadThreads,
createThreadLocal, setSelectedThread, loadThreadMessages, DEFAULT_THREAD_ID,
DEFAULT_THREAD_TITLE.
In `@app/src/services/api/threadApi.ts`:
- Around line 12-21: The unwrapEnvelope function currently treats an envelope
with a missing data field as a successful response; update unwrapEnvelope<T> to
detect envelope-level failures by checking for an "error" property on the
response object and, if present (or if data is undefined/null when an envelope
was expected), throw an Error (or rethrow the embedded error message) so the
thunk rejects instead of returning undefined; preserve behavior for raw
non-envelope responses by returning the value when the object is not an
envelope. Ensure you update the Envelope<T> handling in unwrapEnvelope and any
places calling unwrapEnvelope to expect a thrown error for envelope failures.
In `@src/openhuman/memory/rpc_models.rs`:
- Around line 186-193: The PurgeConversationThreadsResponse struct's
agent_messages_deleted field is being incorrectly populated with
stats.message_count (same as messages_deleted); locate the code that constructs
PurgeConversationThreadsResponse (the mapping where agent_messages_deleted:
stats.message_count is set) and either replace stats.message_count with the
correct agent-specific counter from the stats object (e.g.,
stats.agent_message_count or similar) or remove the agent_messages_deleted field
from the response and struct if no distinct agent count exists; update the
PurgeConversationThreadsResponse definition and all construction sites
(referencing the struct name PurgeConversationThreadsResponse and the
agent_messages_deleted identifier) to keep the types and serialization
consistent.
---
Nitpick comments:
In `@app/src/store/threadSlice.ts`:
- Around line 47-84: Convert the two function declarations appendMessageToCache
and replaceMessagesForThread into const arrow function helpers to follow the
repo convention (use const + arrow functions); specifically, replace "function
appendMessageToCache(...)" with "const appendMessageToCache = (...) => { ... }"
and "function replaceMessagesForThread(...)" with "const
replaceMessagesForThread = (...) => { ... }" while preserving all parameter
names, logic, and references (state, threadId, message, messages,
replaceExisting) so callers and exports remain unchanged.
In `@src/openhuman/memory/conversations/store.rs`:
- Line 18: CONVERSATION_STORE_LOCK is a single global Mutex that serializes all
ConversationStore operations and will become a contention point for multiple
workspaces; replace it with a keyed lock map (e.g., a static concurrent map from
workspace_dir to a per-workspace Mutex or RwLock) and change ConversationStore
access paths to acquire the lock for the specific workspace_dir key rather than
the global CONVERSATION_STORE_LOCK; ensure the keyed map entry creation is
thread-safe (use DashMap, once_cell + Mutex<HashMap<..>>, or similar) and that
locks are cleaned up if needed to avoid unbounded growth.
- Around line 114-143: The current update_message in update_message reads all
messages and calls rewrite_jsonl, causing O(n) writes; change it to append-only
by writing a patch record (e.g., {"op":"patch","message_id":...,
"extra_metadata":...}) to the same path returned by
thread_messages_path(thread_id) instead of mutating the whole file, and update
the reader logic (either read_jsonl or the thread read path that constructs
ConversationMessage state) to replay base messages plus subsequent patch entries
to reconstruct the current ConversationMessage state on read; ensure
ConversationMessagePatch is serialized in the appended record format and that
update_message returns the reconstructed updated ConversationMessage after
appending.
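The append-only replay strategy described for `store.rs` above can be sketched with the standard library alone. This is a simplified, hypothetical model: `Record`, `replay`, and the plain-string bodies stand in for the serde-backed JSONL entries and `ConversationMessagePatch` the comment refers to.

```rust
use std::collections::HashMap;

// Simplified stand-ins for JSONL records: a full message, or a later patch.
#[derive(Clone, Debug, PartialEq)]
enum Record {
    Message { id: String, body: String },
    Patch { message_id: String, body: String },
}

// Replay base messages plus subsequent patch entries to reconstruct state.
// Appending a Patch is O(1); the O(n) cost moves to this read path.
fn replay(records: &[Record]) -> Vec<(String, String)> {
    let mut order = Vec::new();
    let mut state: HashMap<String, String> = HashMap::new();
    for record in records {
        match record {
            Record::Message { id, body } => {
                if !state.contains_key(id) {
                    order.push(id.clone());
                }
                state.insert(id.clone(), body.clone());
            }
            Record::Patch { message_id, body } => {
                // Patches only apply to messages that already exist.
                if state.contains_key(message_id) {
                    state.insert(message_id.clone(), body.clone());
                }
            }
        }
    }
    order
        .into_iter()
        .map(|id| {
            let body = state[&id].clone();
            (id, body)
        })
        .collect()
}

fn main() {
    let log = vec![
        Record::Message { id: "m1".into(), body: "hello".into() },
        Record::Message { id: "m2".into(), body: "world".into() },
        Record::Patch { message_id: "m1".into(), body: "hello (edited)".into() },
    ];
    let messages = replay(&log);
    assert_eq!(messages.len(), 2);
    assert_eq!(messages[0], ("m1".to_string(), "hello (edited)".to_string()));
    println!("ok");
}
```

A real store would periodically compact the log (rewrite once, dropping applied patches) to keep reads bounded.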
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: d9164a30-bbc9-4781-a5f3-b426dd2a4500
⛔ Files ignored due to path filters (2)
Cargo.lock is excluded by !**/*.lock
app/src-tauri/Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (12)
- app/src/pages/Conversations.tsx
- app/src/services/api/threadApi.test.ts
- app/src/services/api/threadApi.ts
- app/src/store/index.ts
- app/src/store/threadSlice.ts
- src/openhuman/memory/conversations/mod.rs
- src/openhuman/memory/conversations/store.rs
- src/openhuman/memory/conversations/types.rs
- src/openhuman/memory/mod.rs
- src/openhuman/memory/ops.rs
- src/openhuman/memory/rpc_models.rs
- src/openhuman/memory/schemas.rs
```tsx
useEffect(() => {
  const defaultThread = threads.find(t => t.id === DEFAULT_THREAD_ID);

  if (!defaultThread) {
    dispatch(
      createThreadLocal({
        id: DEFAULT_THREAD_ID,
        title: DEFAULT_THREAD_TITLE,
        createdAt: new Date().toISOString(),
      })
    );
  }

  // Always set selected thread to ensure messages view is synced from persisted storage
  dispatch(setSelectedThread(DEFAULT_THREAD_ID));
  void dispatch(loadThreads());
  void dispatch(
    createThreadLocal({
      id: DEFAULT_THREAD_ID,
      title: DEFAULT_THREAD_TITLE,
      createdAt: new Date().toISOString(),
    })
  ).then(() => {
    dispatch(setSelectedThread(DEFAULT_THREAD_ID));
    void dispatch(loadThreadMessages(DEFAULT_THREAD_ID));
  });
  // eslint-disable-next-line react-hooks/exhaustive-deps
}, [dispatch]);
```
🛠️ Refactor suggestion | 🟠 Major
Convert .then() chain to async/await.
Per coding guidelines, prefer async/await for promises instead of .then() chains. Consider wrapping the initialization in an async IIFE:
```diff
 useEffect(() => {
-  void dispatch(loadThreads());
-  void dispatch(
-    createThreadLocal({
-      id: DEFAULT_THREAD_ID,
-      title: DEFAULT_THREAD_TITLE,
-      createdAt: new Date().toISOString(),
-    })
-  ).then(() => {
-    dispatch(setSelectedThread(DEFAULT_THREAD_ID));
-    void dispatch(loadThreadMessages(DEFAULT_THREAD_ID));
-  });
+  void (async () => {
+    void dispatch(loadThreads());
+    await dispatch(
+      createThreadLocal({
+        id: DEFAULT_THREAD_ID,
+        title: DEFAULT_THREAD_TITLE,
+        createdAt: new Date().toISOString(),
+      })
+    );
+    dispatch(setSelectedThread(DEFAULT_THREAD_ID));
+    void dispatch(loadThreadMessages(DEFAULT_THREAD_ID));
+  })();
   // eslint-disable-next-line react-hooks/exhaustive-deps
 }, [dispatch]);
```

As per coding guidelines: "Always use async/await for promises in TypeScript instead of .then() chains."
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```tsx
useEffect(() => {
  void (async () => {
    void dispatch(loadThreads());
    await dispatch(
      createThreadLocal({
        id: DEFAULT_THREAD_ID,
        title: DEFAULT_THREAD_TITLE,
        createdAt: new Date().toISOString(),
      })
    );
    dispatch(setSelectedThread(DEFAULT_THREAD_ID));
    void dispatch(loadThreadMessages(DEFAULT_THREAD_ID));
  })();
  // eslint-disable-next-line react-hooks/exhaustive-deps
}, [dispatch]);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src/pages/Conversations.tsx` around lines 276 - 289, The current
useEffect uses a .then() chain after dispatch(createThreadLocal(...)); convert
this to async/await by defining an async IIFE inside useEffect (or an async
inner function) and await dispatch(createThreadLocal({...})) before calling
dispatch(setSelectedThread(DEFAULT_THREAD_ID)) and await
dispatch(loadThreadMessages(DEFAULT_THREAD_ID)); keep the initial void
dispatch(loadThreads()); call as-is or await it if desired, and wrap the awaited
calls in try/catch to handle errors; update references: useEffect, loadThreads,
createThreadLocal, setSelectedThread, loadThreadMessages, DEFAULT_THREAD_ID,
DEFAULT_THREAD_TITLE.
```ts
interface Envelope<T> {
  data?: T;
}

function unwrapEnvelope<T>(response: Envelope<T> | T): T {
  if (response && typeof response === 'object' && 'data' in response) {
    return (response as Envelope<T>).data as T;
  }
  return response as T;
}
```
Don’t treat envelope failures as successful responses.
`unwrapEnvelope` ignores the backend `error` field and will return `undefined` when `data` is absent. The slice's fulfilled reducers dereference these results immediately, so an envelope-level RPC failure turns into a later "Cannot read properties of undefined" instead of a rejected thunk.
🐛 Suggested fix

```diff
 interface Envelope<T> {
   data?: T;
+  error?: {
+    code?: string;
+    message: string;
+  } | null;
 }

-function unwrapEnvelope<T>(response: Envelope<T> | T): T {
-  if (response && typeof response === 'object' && 'data' in response) {
-    return (response as Envelope<T>).data as T;
+const unwrapEnvelope = <T,>(response: Envelope<T> | T): T => {
+  if (response && typeof response === 'object' && ('data' in response || 'error' in response)) {
+    const envelope = response as Envelope<T>;
+    if (envelope.error) {
+      throw new Error(envelope.error.message);
+    }
+    if (typeof envelope.data === 'undefined') {
+      throw new Error('RPC response did not include data');
+    }
+    return envelope.data;
   }
   return response as T;
-}
+};
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
interface Envelope<T> {
  data?: T;
  error?: {
    code?: string;
    message: string;
  } | null;
}

const unwrapEnvelope = <T,>(response: Envelope<T> | T): T => {
  if (response && typeof response === 'object' && ('data' in response || 'error' in response)) {
    const envelope = response as Envelope<T>;
    if (envelope.error) {
      throw new Error(envelope.error.message);
    }
    if (typeof envelope.data === 'undefined') {
      throw new Error('RPC response did not include data');
    }
    return envelope.data;
  }
  return response as T;
};
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src/services/api/threadApi.ts` around lines 12 - 21, The unwrapEnvelope
function currently treats an envelope with a missing data field as a successful
response; update unwrapEnvelope<T> to detect envelope-level failures by checking
for an "error" property on the response object and, if present (or if data is
undefined/null when an envelope was expected), throw an Error (or rethrow the
embedded error message) so the thunk rejects instead of returning undefined;
preserve behavior for raw non-envelope responses by returning the value when the
object is not an envelope. Ensure you update the Envelope<T> handling in
unwrapEnvelope and any places calling unwrapEnvelope to expect a thrown error
for envelope failures.
```rust
/// Response payload for purging all workspace-backed conversations.
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct PurgeConversationThreadsResponse {
    pub messages_deleted: usize,
    pub agent_threads_deleted: usize,
    pub agent_messages_deleted: usize,
}
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Check how PurgeConversationThreadsResponse is constructed from ConversationPurgeStats
rg -n -A 10 'PurgeConversationThreadsResponse' --type rust
```

Repository: tinyhumansai/openhuman
Length of output: 2807

🏁 Script executed:

```bash
rg -n 'struct ConversationPurgeStats' --type rust -A 5
```

Repository: tinyhumansai/openhuman
Length of output: 469
Fix agent_messages_deleted mapping in PurgeConversationThreadsResponse.
`agent_messages_deleted` is incorrectly assigned the same value as `messages_deleted` (both use `stats.message_count`). The field names suggest they should represent distinct counts—either provide a separate stat for agent messages or remove the field if unneeded.

In ops.rs lines 1126-1130, change the mapping from:

```rust
agent_messages_deleted: stats.message_count,
```

to the correct value, or remove this field entirely if not required.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/openhuman/memory/rpc_models.rs` around lines 186 - 193, The
PurgeConversationThreadsResponse struct's agent_messages_deleted field is being
incorrectly populated with stats.message_count (same as messages_deleted);
locate the code that constructs PurgeConversationThreadsResponse (the mapping
where agent_messages_deleted: stats.message_count is set) and either replace
stats.message_count with the correct agent-specific counter from the stats
object (e.g., stats.agent_message_count or similar) or remove the
agent_messages_deleted field from the response and struct if no distinct agent
count exists; update the PurgeConversationThreadsResponse definition and all
construction sites (referencing the struct name PurgeConversationThreadsResponse
and the agent_messages_deleted identifier) to keep the types and serialization
consistent.
…channel events. Update domain subscriber registration to include workspace directory, ensuring proper message handling and persistence across channels. Refactor event structure to include message ID and reply target for improved tracking. Additionally, adjust module visibility for context management.
🧹 Nitpick comments (3)
src/openhuman/memory/conversations/bus.rs (2)
148-150: Minor: Prefer `&Path` over `&PathBuf` for function parameters.

Using `&Path` is more idiomatic as it accepts both `&Path` and `&PathBuf` without requiring the caller to have a `PathBuf`.

📝 Suggested change
```diff
 fn persist_channel_turn(
-    workspace_dir: &PathBuf,
+    workspace_dir: &Path,
     descriptor: ChannelTurnDescriptor<'_>,
 ) -> Result<(), String> {
```

This would require adding `use std::path::Path;` to the imports.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/memory/conversations/bus.rs` around lines 148 - 150, Change the persist_channel_turn signature to accept workspace_dir: &Path instead of &PathBuf and add use std::path::Path to imports; update any call sites of persist_channel_turn (they can continue passing &PathBuf as &Path will coerce) to match the new signature and adjust any internal code that relied on PathBuf-specific methods to either dereference or call .to_path_buf() where necessary.
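The coercion the comment relies on can be shown in a few lines; this is an illustrative sketch, and `file_name_len` is a hypothetical helper, not project code.

```rust
use std::path::{Path, PathBuf};

// Taking &Path accepts both &Path and &PathBuf callers via deref coercion,
// which is why the signature change needs no updates at the call sites.
fn file_name_len(dir: &Path) -> usize {
    dir.file_name().map(|n| n.len()).unwrap_or(0)
}

fn main() {
    let owned: PathBuf = PathBuf::from("/tmp/workspace");
    let borrowed: &Path = Path::new("/tmp/workspace");
    // Both call sites compile against the single &Path parameter.
    assert_eq!(file_name_len(&owned), "workspace".len());
    assert_eq!(file_name_len(borrowed), 9);
    println!("ok");
}
```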
175-186: Consider a more efficient duplicate check for high-volume scenarios.The current implementation reads all messages from the thread file on every persist call to check for duplicates. This is O(n) per message and could become slow for very long-running conversations.
For typical channel message rates this is acceptable, but if you anticipate high-volume use, consider caching recently-persisted message IDs or using an index file.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/memory/conversations/bus.rs` around lines 175 - 186, The duplicate check currently calls get_messages(workspace_dir.clone(), &thread_id) and scans every message to compare message.id against persisted_message_id (built from descriptor.role and descriptor.message_id), which is O(n) per persist; replace this with a more efficient approach by introducing a per-thread index of persisted IDs (e.g., an on-disk index file or an in-memory HashSet cached in a ThreadsIndex or ConversationStore) and update that index when you append a new message so subsequent calls to the persist routine can check membership in O(1) instead of iterating all messages; modify the code paths around the persist function that build persisted_message_id and the call sites that use get_messages to consult the new index (and fall back to scanning on cache-miss or index corruption) and ensure thread-safe access when updating the index.

src/core/jsonrpc.rs (1)
837-838: Consider updating the log message to include conversations.

The log message lists registered subscribers but doesn't mention the newly added conversation persistence subscriber.
📝 Suggested log message update
```diff
 log::info!(
-    "[event_bus] webhook, channel, health, skill, and restart subscribers registered"
+    "[event_bus] webhook, channel, health, skill, conversation, and restart subscribers registered"
 );
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/jsonrpc.rs` around lines 837 - 838, Update the log::info call that currently prints "[event_bus] webhook, channel, health, skill, and restart subscribers registered" to also mention the conversation persistence subscriber (e.g., include "conversation(s)" or "conversation persistence" in the message); locate the log::info invocation in jsonrpc.rs that emits the event_bus subscriber registration message and modify the string to list conversations alongside webhook, channel, health, skill, and restart so the log accurately reflects the newly added subscriber.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@src/core/jsonrpc.rs`:
- Around line 837-838: Update the log::info call that currently prints
"[event_bus] webhook, channel, health, skill, and restart subscribers
registered" to also mention the conversation persistence subscriber (e.g.,
include "conversation(s)" or "conversation persistence" in the message); locate
the log::info invocation in jsonrpc.rs that emits the event_bus subscriber
registration message and modify the string to list conversations alongside
webhook, channel, health, skill, and restart so the log accurately reflects the
newly added subscriber.
In `@src/openhuman/memory/conversations/bus.rs`:
- Around line 148-150: Change the persist_channel_turn signature to accept
workspace_dir: &Path instead of &PathBuf and add use std::path::Path to imports;
update any call sites of persist_channel_turn (they can continue passing
&PathBuf as &Path will coerce) to match the new signature and adjust any
internal code that relied on PathBuf-specific methods to either dereference or
call .to_path_buf() where necessary.
- Around line 175-186: The duplicate check currently calls
get_messages(workspace_dir.clone(), &thread_id) and scans every message to
compare message.id against persisted_message_id (built from descriptor.role and
descriptor.message_id), which is O(n) per persist; replace this with a more
efficient approach by introducing a per-thread index of persisted IDs (e.g., an
on-disk index file or an in-memory HashSet cached in a ThreadsIndex or
ConversationStore) and update that index when you append a new message so
subsequent calls to the persist routine can check membership in O(1) instead of
iterating all messages; modify the code paths around the persist function that
build persisted_message_id and the call sites that use get_messages to consult
the new index (and fall back to scanning on cache-miss or index corruption) and
ensure thread-safe access when updating the index.
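The in-memory HashSet variant of the index suggested above can be sketched as follows. This is a minimal sketch: `PersistedIndex` and `try_record` are hypothetical names, and a real implementation would add the fall-back scan on cache miss and the thread-safe wrapper (e.g., a Mutex) the comment calls for.

```rust
use std::collections::{HashMap, HashSet};

// Hypothetical per-thread index of already-persisted message IDs,
// consulted before appending so the duplicate check is O(1)
// instead of re-reading every message in the thread file.
#[derive(Default)]
struct PersistedIndex {
    by_thread: HashMap<String, HashSet<String>>,
}

impl PersistedIndex {
    // Returns true if the ID was newly recorded (i.e., the turn should
    // be persisted); false means it is a duplicate and can be skipped.
    fn try_record(&mut self, thread_id: &str, message_id: &str) -> bool {
        self.by_thread
            .entry(thread_id.to_string())
            .or_default()
            .insert(message_id.to_string())
    }
}

fn main() {
    let mut index = PersistedIndex::default();
    assert!(index.try_record("thread-1", "user:42")); // first turn persists
    assert!(!index.try_record("thread-1", "user:42")); // duplicate skipped
    assert!(index.try_record("thread-2", "user:42")); // other thread unaffected
    println!("ok");
}
```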
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 21b2c7ad-b860-47b3-a2a3-2612bc45c27b
📒 Files selected for processing (7)
- src/core/jsonrpc.rs
- src/openhuman/channels/mod.rs
- src/openhuman/channels/runtime/dispatch.rs
- src/openhuman/channels/runtime/startup.rs
- src/openhuman/event_bus/events.rs
- src/openhuman/memory/conversations/bus.rs
- src/openhuman/memory/conversations/mod.rs
✅ Files skipped from review due to trivial changes (2)
- src/openhuman/channels/mod.rs
- src/openhuman/memory/conversations/mod.rs
Summary
`memory` RPC controllers for listing, creating, appending, updating, deleting, and purging conversation threads/messages.

Problem
Solution
- src/openhuman/memory/conversations/: track threads in .threads.jsonl and store each thread's messages in its own JSONL log under the workspace memory directory.
- Conversations.tsx

Submission Checklist
- Unit tests (app/) and/or cargo test (core) for logic you add or change
- E2E coverage (app/test/e2e, mock backend, tests/json_rpc_e2e.rs as appropriate)
- Doc comments: /// or //! (Rust), JSDoc or brief file/module headers (TS) on public APIs and non-obvious modules
Related
Summary by CodeRabbit
New Features
Tests
Refactor