
Add context resurfacing guidance to draft planning #13

Merged

leeovery merged 1 commit into main from
claude/add-context-resurfacing-0117Vc7j9tMDvJ2yzfuQnzxT
Dec 12, 2025
Conversation

@leeovery
Owner

Encourage Claude to pause and resurface previously-logged topics when
new context emerges during draft planning. This ensures information
discovered later in reference material doesn't silently slip past
topics that could be affected.

leeovery force-pushed the claude/add-context-resurfacing-0117Vc7j9tMDvJ2yzfuQnzxT branch from 7a56ca1 to 140d32c on December 12, 2025 at 08:15
Encourage Claude to resurface previously-logged topics when new context
emerges. Standard workflow still applies - re-present full topic with
changes noted, get approval before updating the draft.
leeovery force-pushed the claude/add-context-resurfacing-0117Vc7j9tMDvJ2yzfuQnzxT branch from 140d32c to 563bcb8 on December 12, 2025 at 08:18
leeovery merged commit 9a63fd9 into main on Dec 12, 2025
leeovery deleted the claude/add-context-resurfacing-0117Vc7j9tMDvJ2yzfuQnzxT branch on December 12, 2025 at 08:32
leeovery added a commit that referenced this pull request Apr 20, 2026
…fig null unset

Three small correctness/style fixes bundled:

- OpenAIProvider.embed() now throws a descriptive error if OpenAI
  returns an empty data array, instead of a cryptic TypeError from
  reading res.data[0].embedding. Unlikely in practice but costs
  nothing to guard. (deferred #13)

- OpenAIProvider.embedBatch() no longer mutates res.data in place.
  Spread before sort. (deferred #12)

- config.loadConfig() now treats explicit null in system/project config
  as an unset sentinel, letting a project config clear a system-level
  default. (deferred #14)

Closes deferred-issues #12, #13, #14.
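A minimal sketch of the empty-response guard described in the first bullet. The function name `extractEmbedding` and the response shape are assumptions drawn from the commit message; the real guard lives inside `OpenAIProvider.embed()`.

```javascript
// Hypothetical helper illustrating the guard. Without it, an empty
// `data` array surfaces as a cryptic TypeError when the caller reads
// res.data[0].embedding; with it, the failure is a descriptive Error.
function extractEmbedding(res) {
  if (!Array.isArray(res.data) || res.data.length === 0) {
    throw new Error('OpenAI embeddings response contained no data rows');
  }
  return res.data[0].embedding;
}
```

The check is cheap relative to the network call it follows, which is why the commit calls it "unlikely in practice but costs nothing to guard".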
leeovery added a commit that referenced this pull request Apr 21, 2026
embedBatch assumed res.data.length === requested length. If OpenAI
returned fewer rows (rare but not impossible — partial availability,
certain rate-limit interactions), results[offset + i] for missing
indices stayed undefined. Downstream: doc.embedding = undefined,
Orama inserted the chunk with no embedding field, chunk silently
degraded to keyword-only with no warning.

embed() had this guard already (deferred-issue #13); embedBatch was
missed.

Added length check after each _fetch in both branches of embedBatch:
- Single-batch (texts.length <= MAX_BATCH_SIZE): validate
  res.data.length === texts.length.
- Chunked (>MAX_BATCH_SIZE): validate per slice.
Error message includes requested vs received counts and chunk offset
when applicable.
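The length check above can be sketched as a small validator run after each fetch. The helper name `assertResponseLength` is hypothetical; per the commit, the real checks sit inline in both branches of `OpenAIProvider.embedBatch()`.

```javascript
// Hypothetical validator: rejects a short (or missing) data array so a
// partial response cannot leave results[offset + i] undefined downstream.
// The message includes requested vs received counts and the chunk offset.
function assertResponseLength(res, requested, chunkOffset = 0) {
  const received = Array.isArray(res.data) ? res.data.length : 0;
  if (received !== requested) {
    throw new Error(
      `embeddings response length mismatch: requested ${requested}, ` +
      `received ${received} (chunk offset ${chunkOffset})`
    );
  }
}
```

In the single-batch branch the offset is always 0; in the chunked branch each slice passes its own offset so the error pinpoints which chunk came back short.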

Tests: two new cases in test-knowledge-openai.cjs — short response
(2 rows for 3 inputs) and missing data array both assert rejection
with /response length mismatch/ pattern. Confirmed both fail on
pre-fix code by reverting the guards.
