feat: optimize memory extraction for concise output and precise retrieval #549
lishixiang0705 wants to merge 1 commit into volcengine:main from
Conversation
…eval

- Prompt (memory_extraction.yaml):
  - Add explicit length targets for abstract (~50-80 chars) and content (2-4 sentences)
  - Add good/bad examples showing concise vs verbose memory patterns
  - Guide LLM to split multi-topic memories into separate atomic items
  - Emphasize fact-dense 'sticky note' style over narrative expansion
- Vectorization (memory_extractor.py):
  - Use abstract instead of content for embedding generation
  - Shorter text produces more discriminative vectors, improving retrieval precision
  - Reduces score clustering (e.g., 0.18-0.21 all similar) by focusing embeddings

Background: In production, extracted memories averaged 500-2000 chars per item, causing:

1. Embedding vector dilution — any query fuzzy-matches long content
2. Poor score discrimination — relevant and irrelevant items score similarly
3. Context bloat — 5 injected memories could exceed 5000 chars per turn

After this change, new memories will be shorter and more atomic, and vector search will match on focused abstract text rather than diluted content.
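The vectorization switch described above amounts to a small fallback when picking the text to embed. A minimal sketch — the function name and dict shape are illustrative, not the actual `memory_extractor.py` API:

```python
def pick_vectorize_text(memory: dict) -> str:
    """Choose the text to embed for a memory item.

    Prefer the short abstract (~50-80 chars) so the embedding stays
    discriminative; fall back to content so nothing embeds as an
    empty string. Names here are illustrative only.
    """
    return memory.get("abstract") or memory.get("content") or ""


# A long narrative memory is embedded on its focused abstract:
memory = {
    "abstract": "LCM disabled 2026-03-10: CJK token underestimate ~3x.",
    "content": "A much longer narrative description ... " * 40,
}
print(pick_vectorize_text(memory))
```

The `or`-chain is what the PR calls "abstract or content": it keeps retrieval anchored on the dense abstract while guaranteeing a non-empty embedding input for older items that have no abstract.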
lishixiang seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account. Already signed the CLA but the status is still pending? Let us recheck it.
Thanks for the work on this — the overall direction makes sense. Two things worth checking before merging:
> **✅ GOOD** (split into separate memories):
> Memory 1 [entities]: `Lossless-Claw (LCM v0.2.8): OpenClaw LLM 压缩插件,Martian-Engineering 开发,2026-03-10 已禁用。`
> Memory 2 [cases]: `LCM 禁用原因:estimateTokens 对 CJK 低估 3x;assemble() 无预算控制致注入膨胀。`
> Memory 3 [entities]: `Meridian: LCM 后继,复用~1200行。SQLite+FTS5、tiktoken、三区预算硬分配。`
there are few-shot examples below, so these GOOD/BAD examples seem redundant or misplaced.
> - preferences: `## Preference Domain` / `## Specific Preferences`
> - entities: `## Basic Info` / `## Core Attributes`
> - events: `## Decision Content` / `## Reason` / `## Result`
> - cases: `## Problem` / `## Solution`
these few lines of prompt optimization are great. will possibly cherry-pick then merge if no further updates.
… fix

Bundles three in-flight contributor PRs (#533, #549, #951) with reviewer feedback addressed, consolidated into a single set of focused edits.

memory_extraction.yaml (#549):
- Add length targets to the Three-Level Structure section: abstract ~50-80 chars, overview 3-5 bullets, content 2-4 sentences.
- Kept the concise guidance Zayn approved; dropped the BAD/GOOD content example blocks he flagged as redundant with the few-shot examples below, and kept all text in English per yangxinxin-7's language-mixing concern.

memory_merge_bundle.yaml (#533):
- Add facet coherence check: same category is not sufficient to merge; memories covering different facets (e.g. Python code style + food preference) must output {"decision": "skip"}.
- Add hard length limits: abstract ≤ 80, overview ≤ 200, content ≤ 300.
- Switch merge strategy from accumulate-all to condensed snapshot: on conflict keep the newer value; do not retain superseded details.
- Bump template version 1.0.0 → 2.0.0.

memory_extractor.py (#549):
- Vectorize on `abstract or content` instead of `content`. Shorter text yields more discriminative embeddings and reduces score clustering.

semantic_processor.py + model_retry.py (#951):
- Fix memory semantic queue stall: _process_memory_directory() had two silent early-return paths (ls failure, write_file failure) that let on_dequeue() hit report_success() while the work actually failed — telemetry got marked_failed, but the queue's in_progress counter and processed count treated the message as done. Re-raise as RuntimeError so on_dequeue routes to report_error().
- Classify local filesystem errors (FileNotFoundError, PermissionError, IsADirectoryError, NotADirectoryError — including chained __cause__) as "permanent" in classify_api_error, so a bad path fails the queue entry instead of being re-enqueued forever.

Tests:
- tests/utils/test_circuit_breaker.py: cover the four filesystem error types and a chained FileNotFoundError.
- tests/storage/test_memory_semantic_stall.py: exercise on_dequeue through the real classifier — ls failure must hit on_error, empty dir must still hit on_success (no regression).
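The two #951 fixes described above can be sketched in a few lines. Names like `is_permanent_error` and `process_memory_directory` are assumptions for illustration; the real `classify_api_error` and `_process_memory_directory` may be shaped differently:

```python
# Filesystem errors that retrying can never fix — re-enqueueing is futile.
PERMANENT_FS_ERRORS = (
    FileNotFoundError,
    PermissionError,
    IsADirectoryError,
    NotADirectoryError,
)


def is_permanent_error(exc):
    """True if exc, or anything in its __cause__ chain, is a local
    filesystem error that should fail the queue entry permanently."""
    while exc is not None:
        if isinstance(exc, PERMANENT_FS_ERRORS):
            return True
        exc = exc.__cause__  # walk chained exceptions (raise ... from ...)
    return False


def process_memory_directory(listing):
    """Instead of silently returning on a failed listing (which let the
    queue count the message as successfully processed), re-raise as
    RuntimeError so the dequeue loop routes to report_error()."""
    if listing is None:  # ls failure
        raise RuntimeError(
            "memory directory listing failed"
        ) from FileNotFoundError("ls")
    return listing
```

Walking `__cause__` is what catches a `FileNotFoundError` that was wrapped in a `RuntimeError` by the re-raise above: the classifier still sees the root filesystem error and marks the entry permanent rather than re-enqueueing it forever.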
Cherry-picked into #1522 for batch merge along with #533 and #951. What I kept:
What I dropped, addressing review feedback:
I did not update the existing Few-shot Examples to match the new length targets (@yangxinxin-7's first point) — that felt like a separate scope change worth handling in a follow-up so this PR stays small. Memory-quality effect (embedding score spread, retrieval precision) is pending eval on the bundled PR. Thanks @lishixiang0705.
…lish examples

Addresses review feedback on top of the cherry-picked #549 commit:
- Remove the BAD/GOOD content example blocks — they duplicate the Few-shot Examples section immediately below them (ZaynJarvis's inline comment on #549).
- Restore English values in the L0 bullet examples; the Chinese values introduced by #549 would bias `output_language: auto` for non-Chinese users (yangxinxin-7's inline comment on #549).

Keeps the substantive contribution from #549: the length targets (~50-80 chars / 3-5 bullets / 2-4 sentences) and the vectorize-on-abstract switch in memory_extractor.py.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Following up on the earlier cherry-pick attempt — closed #1522 and re-did the cherry-pick properly so your original commit is preserved via
Problem
In production, extracted memories average 500-2000 chars per item, causing:
Solution
1. Prompt optimization (`memory_extraction.yaml`)
2. Vectorization improvement (`memory_extractor.py`)
   - Use `abstract` instead of `content` for embedding generation
   - `abstract or content` ensures no empty embeddings

Expected Impact
Files Changed
- `openviking/prompts/templates/compression/memory_extraction.yaml` — prompt template
- `openviking/session/memory_extractor.py` — 2 lines: `set_vectorize` text source
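The hard length limits proposed in the bundled merge-prompt change (abstract ≤ 80, overview ≤ 200, content ≤ 300 chars) could also be enforced defensively in code rather than relying on the LLM alone. A minimal sketch — the function name and dict shape are assumptions, not part of the actual diff:

```python
# Proposed hard caps from the bundled memory_merge_bundle.yaml change.
LIMITS = {"abstract": 80, "overview": 200, "content": 300}


def over_limit_fields(memory: dict) -> list:
    """Return the names of fields whose text exceeds the proposed caps.

    An empty result means the merged memory respects all limits; a
    non-empty result could trigger a re-prompt or a truncation pass.
    """
    return [
        field
        for field, cap in LIMITS.items()
        if len(memory.get(field, "")) > cap
    ]
```

Checking the merged output this way would catch regressions where the model drifts back toward the 500-2000 char items the PR set out to eliminate.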