
feat: optimize memory extraction for concise output and precise retrieval#549

Open
lishixiang0705 wants to merge 1 commit into volcengine:main from lishixiang0705:feat/optimize-memory-extraction

Conversation

@lishixiang0705

Problem

In production, extracted memories average 500-2000 chars per item, causing:

  1. Embedding vector dilution — any query fuzzy-matches long content, scores cluster in 0.18-0.21 range
  2. Poor retrieval discrimination — relevant and irrelevant items score similarly
  3. Context bloat — 5 injected memories can exceed 5000 chars (~3000 tokens) per turn

Solution

1. Prompt optimization (memory_extraction.yaml)

  • Add explicit length targets: abstract ~50-80 chars, content 2-4 sentences
  • Add good/bad examples showing concise vs verbose patterns
  • Guide LLM to split multi-topic memories into separate atomic items
  • Emphasize fact-dense "sticky note" style over narrative expansion

2. Vectorization improvement (memory_extractor.py)

  • Use abstract instead of content for embedding generation
  • Shorter text → more focused vectors → better cosine similarity discrimination
  • Fallback: abstract or content ensures no empty embeddings
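
The fallback in the last bullet can be sketched as a minimal Python illustration. The `MemoryItem` dataclass and `vectorize_text` helper below are hypothetical stand-ins, not the actual code: the real `memory_extractor.py` wires this through a `set_vectorize` call and its own types.

```python
from dataclasses import dataclass

@dataclass
class MemoryItem:
    abstract: str  # short summary (~50-80 chars), now the embedding source
    content: str   # fuller 2-4 sentence body, kept for display/injection

def vectorize_text(item: MemoryItem) -> str:
    """Pick the text to embed: prefer the focused abstract,
    fall back to content so we never embed an empty string."""
    return item.abstract or item.content
```

Because `or` treats the empty string as falsy, items extracted before this change (or with a missing abstract) still get a non-empty embedding source.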

Expected Impact

  • Memory size: 500-2000 chars → 100-300 chars per item
  • Injection cost: ~3000 tokens/turn → ~600 tokens/turn (80% reduction)
  • Retrieval precision: score spread from 0.03 → 0.15+ between relevant/irrelevant

Files Changed

  • openviking/prompts/templates/compression/memory_extraction.yaml — prompt template
  • openviking/session/memory_extractor.py — 2 lines: set_vectorize text source

…eval

- Prompt (memory_extraction.yaml):
  - Add explicit length targets for abstract (~50-80 chars) and content (2-4 sentences)
  - Add good/bad examples showing concise vs verbose memory patterns
  - Guide LLM to split multi-topic memories into separate atomic items
  - Emphasize fact-dense 'sticky note' style over narrative expansion

- Vectorization (memory_extractor.py):
  - Use abstract instead of content for embedding generation
  - Shorter text produces more discriminative vectors, improving retrieval precision
  - Reduces score clustering (e.g., 0.18-0.21 all similar) by focusing embeddings

Background:
  In production, extracted memories averaged 500-2000 chars per item, causing:
  1. Embedding vector dilution — any query fuzzy-matches long content
  2. Poor score discrimination — relevant and irrelevant items score similarly
  3. Context bloat — 5 injected memories could exceed 5000 chars per turn

  After this change, new memories will be shorter and more atomic, and
  vector search will match on focused abstract text rather than diluted content.
@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.


lishixiang does not appear to be a GitHub user. You need a GitHub account to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You have signed the CLA already but the status is still pending? Let us recheck it.

@yangxinxin-7
Collaborator

Thanks for the work on this — the overall direction makes sense. Two things worth checking before merging:

  1. Prompt inconsistency
    The Three-Level Structure section now describes L2 as "2-4 sentences", but the # Few-shot Examples section is unchanged and
    still shows verbose narrative-style content. Since LLMs tend to follow examples more than instructions, the new guidance
    may have limited effect until the few-shot examples are updated to match.

  2. Language mixing in examples
    The new ✅ GOOD examples are in Chinese while the rest of the prompt is in English. This may unintentionally bias output
    language for non-Chinese users when output_language is "auto".

**✅ GOOD** (split into separate memories):
Memory 1 [entities]: `Lossless-Claw (LCM v0.2.8): OpenClaw LLM 压缩插件,Martian-Engineering 开发,2026-03-10 已禁用。`
Memory 2 [cases]: `LCM 禁用原因:estimateTokens 对 CJK 低估 3x;assemble() 无预算控制致注入膨胀。`
Memory 3 [entities]: `Meridian: LCM 后继,复用~1200行。SQLite+FTS5、tiktoken、三区预算硬分配。`
Collaborator


There are few-shot examples below; these GOOD/BAD examples seem redundant or misplaced.

- preferences: `## Preference Domain` / `## Specific Preferences`
- entities: `## Basic Info` / `## Core Attributes`
- events: `## Decision Content` / `## Reason` / `## Result`
- cases: `## Problem` / `## Solution`
Collaborator


This few-line prompt optimization is great. Will possibly cherry-pick then merge if there are no further updates.

ZaynJarvis added a commit that referenced this pull request Apr 17, 2026
… fix

Bundles three in-flight contributor PRs (#533, #549, #951) with reviewer
feedback addressed, consolidated into a single set of focused edits.

memory_extraction.yaml (#549):
- Add length targets to the Three-Level Structure section: abstract
  ~50-80 chars, overview 3-5 bullets, content 2-4 sentences.
- Kept the concise guidance Zayn approved; dropped the BAD/GOOD content
  example blocks he flagged as redundant with the few-shot examples
  below, and kept all text in English per yangxinxin-7's language-mixing
  concern.

memory_merge_bundle.yaml (#533):
- Add facet coherence check: same category is not sufficient to merge;
  memories covering different facets (e.g. Python code style + food
  preference) must output {"decision": "skip"}.
- Add hard length limits: abstract ≤ 80, overview ≤ 200, content ≤ 300.
- Switch merge strategy from accumulate-all to condensed snapshot: on
  conflict keep newer value; do not retain superseded details.
- Bump template version 1.0.0 → 2.0.0.
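
The facet-coherence and hard-limit rules above are enforced by the prompt, but they could also be checked in code after the LLM responds. The sketch below is a hypothetical post-check, not code from this PR: `check_merge`, `LIMITS`, and the decision-dict shape are all assumptions for illustration.

```python
# Hard length limits from the merge-bundle prompt: abstract <= 80,
# overview <= 200, content <= 300 characters.
LIMITS = {"abstract": 80, "overview": 200, "content": 300}

def check_merge(decision: dict, facet_a: str, facet_b: str) -> dict:
    """Validate an LLM merge decision against the prompt's rules."""
    # Same category alone is not sufficient: different facets
    # (e.g. Python code style vs food preference) force a skip.
    if facet_a != facet_b:
        return {"decision": "skip"}
    if decision.get("decision") != "merge":
        return decision
    merged = decision["merged"]
    for field, limit in LIMITS.items():
        text = merged.get(field, "")
        if len(text) > limit:
            merged[field] = text[:limit]  # clamp to the hard limit
    return decision
```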

memory_extractor.py (#549):
- Vectorize on `abstract or content` instead of `content`. Shorter text
  yields more discriminative embeddings and reduces score clustering.

semantic_processor.py + model_retry.py (#951):
- Fix memory semantic queue stall: _process_memory_directory() had two
  silent early-return paths (ls failure, write_file failure) that let
  on_dequeue() hit report_success() while the work actually failed —
  telemetry got marked_failed, but the queue's in_progress counter and
  processed count treated the message as done. Re-raise as RuntimeError
  so on_dequeue routes to report_error().
- Classify local filesystem errors (FileNotFoundError, PermissionError,
  IsADirectoryError, NotADirectoryError — including chained __cause__)
  as "permanent" in classify_api_error, so a bad path fails the queue
  entry instead of being re-enqueued forever.
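
The permanent-error classification, including the chained `__cause__` walk, can be sketched as follows. `is_permanent_fs_error` is a hypothetical helper for illustration, not the actual `classify_api_error` signature in `model_retry.py`.

```python
# The four local filesystem error types that should fail a queue
# entry permanently instead of re-enqueueing it forever.
FS_ERRORS = (FileNotFoundError, PermissionError,
             IsADirectoryError, NotADirectoryError)

def is_permanent_fs_error(exc: BaseException) -> bool:
    """Walk the exception and its __cause__ chain; any local
    filesystem error anywhere in the chain is permanent."""
    seen = set()
    while exc is not None and id(exc) not in seen:
        seen.add(id(exc))  # guard against cyclic chains
        if isinstance(exc, FS_ERRORS):
            return True
        exc = exc.__cause__
    return False
```

Walking `__cause__` matters because the re-raised `RuntimeError("...") from e` pattern described above would otherwise hide the underlying filesystem error from the classifier.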

Tests:
- tests/utils/test_circuit_breaker.py: cover the four filesystem error
  types and a chained FileNotFoundError.
- tests/storage/test_memory_semantic_stall.py: exercise on_dequeue
  through the real classifier — ls failure must hit on_error, empty dir
  must still hit on_success (no regression).
@ZaynJarvis
Collaborator

Cherry-picked into #1522 for batch merge along with #533 and #951.

What I kept:

  • The length targets on the Three-Level Structure section (~50-80 / 3-5 bullets / 2-4 sentences) — this is the core of the prompt optimization.
  • The set_vectorize(abstract or content) switch in memory_extractor.py.

What I dropped, addressing review feedback:

  • The BAD/GOOD content example blocks — redundant with the Few-shot Examples section below (my earlier inline comment).
  • The Chinese-mixed example values — keeps the prompt English-only so output_language: auto doesn't get biased for non-Chinese users (@yangxinxin-7's point).

I did not update the existing Few-shot Examples to match the new length targets (@yangxinxin-7's first point) — that felt like a separate scope change worth handling in a follow-up so this PR stays small.

Memory-quality effect (embedding score spread, retrieval precision) is pending eval on the bundled PR. Thanks @lishixiang0705.

ZaynJarvis added a commit that referenced this pull request Apr 17, 2026
…lish examples

Addresses review feedback on top of the cherry-picked #549 commit:

- Remove the BAD/GOOD content example blocks — they duplicate the
  Few-shot Examples section immediately below them (ZaynJarvis's inline
  comment on #549).
- Restore English values in the L0 bullet examples; the Chinese values
  introduced by #549 would bias `output_language: auto` for non-Chinese
  users (yangxinxin-7's inline comment on #549).

Keeps the substantive contribution from #549: the length targets
(~50-80 chars / 3-5 bullets / 2-4 sentences) and the vectorize-on-
abstract switch in memory_extractor.py.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
@ZaynJarvis
Collaborator

Following up on the earlier cherry-pick attempt — closed #1522 and re-did the cherry-pick properly so your original commit is preserved via git cherry-pick (lishixiang authorship intact). Bundled with #533 in #1530 — memory effect eval pending before merge. I also dropped the inline BAD/GOOD content blocks and reverted the L0 bullet examples to the original English strings per earlier review feedback.
