```typescript
messages = [
  {
    role: "assistant",
    content: cleanContent(stripPrivateContent(payload.last_assistant_message)),
  },
];
```
**Bug (medium): fallback path can ingest empty `[assistant]` memories**
The `cleanContent(stripPrivateContent(...))` call can return an empty string when `last_assistant_message` consists entirely of injected tags (e.g. a `<system-reminder>` or `<supermemory-context>` block). In that case `messages` is still set to a one-element array, so `messages.length === 0` is never true and capture proceeds with empty content. `SupermemoryClient.formatConversationMessage()` will then store a meaningless `[assistant]` placeholder memory.
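A minimal repro sketch of this path (the `cleanContent` body mirrors the regex stripping quoted later in this review; `stripPrivateContent` is omitted and the surrounding variable names are illustrative, not the actual code):

```typescript
// cleanContent mirrors the regex stripping from this PR; stripPrivateContent
// is omitted for brevity, and the surrounding names are illustrative.
function cleanContent(content: string): string {
  return content
    .replace(/<system-reminder>[\s\S]*?<\/system-reminder>/gi, "")
    .replace(/<supermemory-context>[\s\S]*?<\/supermemory-context>/gi, "");
}

// A last_assistant_message consisting entirely of an injected block:
const lastAssistantMessage =
  "<system-reminder>Injected context only.</system-reminder>";

// The fallback path always builds a one-element array...
const messages = [
  { role: "assistant", content: cleanContent(lastAssistantMessage) },
];

// ...so a messages.length === 0 guard never fires, even though the content
// is empty and a placeholder "[assistant]" memory would be stored.
console.log(messages.length); // 1
console.log(messages[0].content === ""); // true
```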
Fix: guard the push with an empty-content check, the same way `parseTranscript` does:
```typescript
const cleaned = cleanContent(stripPrivateContent(payload.last_assistant_message));
if (cleaned) {
  messages = [{ role: "assistant", content: cleaned }];
  newLamHash = currentHash;
}
```
```typescript
export function cleanContent(content: string): string {
  return content
    .replace(/<system-reminder>[\s\S]*?<\/system-reminder>/gi, "")
    .replace(/<supermemory-context>[\s\S]*?<\/supermemory-context>/gi, "");
}
```
**Bug (medium): `cleanContent()` silently truncates legitimate user/assistant content**
The regexes strip every `<system-reminder>…</system-reminder>` and `<supermemory-context>…</supermemory-context>` block from any message, regardless of who wrote it or why. In a coding-assistant context (especially in this very repository), users and assistants can legitimately discuss, paste, or generate those exact tag names in code examples or documentation. When that happens, the stored memory is silently truncated or emptied: data loss with no warning.
The stripping should be scoped to actual injected wrapper content only. One approach is to apply `cleanContent()` solely to the outermost message envelope (i.e. before splitting into individual turns) rather than to every individual turn's text, so that tags embedded inside a turn's prose are preserved.
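One way to sketch that scoping, assuming injected wrappers only ever appear at the very start of the raw envelope (the function name and the start-anchoring are illustrative assumptions, not code from this PR):

```typescript
// Illustrative sketch: strip injected wrapper blocks only when anchored at
// the start of the raw message, so tag names quoted later in a turn's
// prose or code survive. Name and anchoring are assumptions, not PR code.
function stripLeadingInjections(raw: string): string {
  return raw.replace(
    /^(?:\s*<(system-reminder|supermemory-context)>[\s\S]*?<\/\1>)+\s*/i,
    ""
  );
}

const injected =
  "<system-reminder>ctx</system-reminder>Discussing the <system-reminder> tag.";
console.log(stripLeadingInjections(injected));
// -> "Discussing the <system-reminder> tag."

// A message that merely mentions the tag is left untouched:
console.log(stripLeadingInjections("See <system-reminder> in the docs."));
// -> "See <system-reminder> in the docs."
```

Equivalently, the unanchored stripping could run once over the raw transcript before it is split into turns, which is the envelope-level approach described above.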