fix: cloud sync deletion, memory issues, and agent prompt optimization#242
fix: cloud sync deletion, memory issues, and agent prompt optimization#242
Conversation
- Fix resolveLocalDeleted equal case to delete instead of download - Reduce deletion sync debounce from 300ms to 50ms for faster propagation - Add error classification in memory indexing: ENOENT redirects to delete, 4xx triggers inline_text fallback then server-side cleanup - Remove redundant Memory tab search overlay (global search covers it) - Implement stale-while-revalidate for Memory tab data loading
- Fix concurrent refresh race: use ref counter so only the latest refresh invocation clears the refreshing indicator - Move setDataCache out of React setState updater to avoid brief Zustand/React state desync - Scheduler: use Math.min(existingDeadline, newDeadline) so a longer debounce event cannot cancel a shorter one already scheduled - Bump deletion debounce from 50ms to 100ms to avoid premature sync during chokidar rename event splitting
…back - Replace duck-typed status check with instanceof WorkspaceContentApiError to avoid accidentally classifying non-HTTP errors as non-retryable - Re-read file content when falling back from sync_object_ref to inline_text to ensure fresh data instead of potentially stale earlier read
…nalFacts Extract merged array from setPersonalFacts functional updater and pass the same reference to setDataCache, eliminating the stale closure over personalFacts that could cause cache/state divergence after concurrent operations.
- Exclude 401/408/429 from non-retryable classification so auth refresh races and rate limiting get proper retry instead of permanent failure - Reset all cached payload fields when scopeKey changes to prevent stale data from previous workspace leaking into the new scope
- Fallback inline_text catch now checks isNonRetryable: transient errors (network/5xx) get scheduled for retry instead of dropping the task - Use personalFactsRef to read latest facts in loadMorePersonalFacts, avoiding reliance on functional updater synchronous execution which is not guaranteed under React 18 concurrent rendering
…1/429 - Compose scopeKey as vaultPath + user.id to prevent cross-account data leak when switching accounts on the same vault - Exclude 401 (auth refresh), 408 (timeout), 429 (rate limit) from non-retryable classification so transient failures get proper retry - Allow retry for transient inline_text fallback failures instead of immediately dropping the indexing task
…versational style - Rename agent identity from Mory to Mimi in core prompt - Add execution loop qualifier: skip workflow for greetings/simple questions - Add response style rule #8: match depth to complexity - Add vibe line: read the room, respond accordingly - Translate compaction prompt to English with legacy 【会话摘要】 compat - Change compaction language directive to "primary language of the conversation" - Translate all tool descriptions, Zod .describe(), and error messages to English across 14 tool files (subagent, task, web, file, search) - Add test for legacy summary prefix backward compatibility
…on-and-memory-issues # Conflicts: # apps/moryflow/pc/src/renderer/workspace/components/memory/use-memory-page.ts
There was a problem hiding this comment.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 2c2a2984d5
ℹ️ About Codex in GitHub
Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".
| '<conversation>\n'; | ||
| const SUMMARY_PROMPT_SUFFIX = '\n</conversation>'; |
There was a problem hiding this comment.
Use non-colliding markers for compaction transcript
Switching the transcript wrapper to <conversation>...</conversation> makes delimiter collisions much easier in real chats (for example, when users paste XML/HTML containing </conversation>), and buildSummaryPrompt concatenates raw history text without escaping, so that token can prematurely terminate the “data-only” section and let trailing transcript text be interpreted as fresh instructions during summary generation. This is a regression from the previous rare marker and can weaken compaction safety/summary accuracy for those inputs.
Useful? React with 👍 / 👎.
Summary
【会话摘要】prefix detectionTest plan
packages/agents-runtime: 133 tests pass (including new legacy prefix compat test)packages/agents-tools: 33 tests pass