feat: add experimental LLM response repair for malformed JSON #10482
Conversation
This PR adds an experimental feature to automatically repair malformed JSON responses from LLMs that struggle with strict output formats.

Changes:
- Add llmResponseRepair experimental setting
- Create JSON repair utility (repair-json.ts) with support for:
  - Trailing commas
  - Single quotes to double quotes conversion
  - Unquoted keys
  - Missing closing brackets/braces
  - Prefixed text stripping
  - Markdown code block extraction
- Integrate repair into NativeToolCallParser as fallback when JSON.parse fails
- Add comprehensive test suite for the repair utility

Closes #10481
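For illustration, here is a minimal sketch of the kind of pattern-based repair the description lists. It is not the PR's actual repair-json.ts; the function name, the individual regexes, and the naive bracket balancing are assumptions made for this sketch only.

```typescript
// Illustrative sketch only - the real repair-json.ts may differ in structure and edge cases.
export function repairJson(raw: string): string {
	let text = raw.trim()

	// Extract the payload from a markdown code block, if present.
	const fenced = text.match(/`{3}(?:json)?\s*([\s\S]*?)`{3}/)
	if (fenced) {
		text = fenced[1].trim()
	}

	// Strip prefixed text before the first { or [.
	const start = text.search(/[{[]/)
	if (start > 0) {
		text = text.slice(start)
	}

	// Convert single-quoted strings to double-quoted strings.
	text = text.replace(/'([^'\\]*(?:\\.[^'\\]*)*)'/g, (_m, inner) => `"${inner.replace(/"/g, '\\"')}"`)

	// Quote unquoted object keys (naive; does not guard against matches inside string values).
	text = text.replace(/([{,]\s*)([A-Za-z_][A-Za-z0-9_]*)\s*:/g, '$1"$2":')

	// Remove trailing commas before a closing bracket/brace.
	text = text.replace(/,\s*([}\]])/g, "$1")

	// Append missing closing brackets/braces (naive; ignores brackets inside strings).
	const openers: string[] = []
	for (const ch of text) {
		if (ch === "{" || ch === "[") openers.push(ch)
		else if (ch === "}" || ch === "]") openers.pop()
	}
	while (openers.length) {
		text += openers.pop() === "{" ? "}" : "]"
	}

	return text
}
```

Under these assumptions, an input like `{'foo': 1,}` wrapped in a markdown code fence would come back as `{"foo": 1}`.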
Reviewed this PR. Found 1 issue that needs to be addressed before merging.
Mention @roomote in a comment to request specific changes to this pull request or fix all unresolved issues.
```typescript
/**
 * Configuration for LLM response repair feature.
 * When enabled, attempts to repair malformed JSON from LLM responses.
 * @see Issue #10481
 */
private static repairEnabled = false

/**
 * Enable or disable the LLM response repair feature.
 * This should be called when the experimental setting changes.
 */
public static setRepairEnabled(enabled: boolean): void {
	this.repairEnabled = enabled
}

/**
 * Check if the repair feature is enabled.
 */
public static isRepairEnabled(): boolean {
	return this.repairEnabled
}
```
The setRepairEnabled method is defined but never called anywhere in the codebase. This means the experimental setting llmResponseRepair will have no effect - even if a user enables it, repairEnabled will always remain false. Other experiments in the codebase use a different pattern: they check experiments.isEnabled(state?.experiments ?? {}, EXPERIMENT_IDS.XXX) at runtime when the feature is needed. Either wire up setRepairEnabled() to be called when the experimental setting changes (e.g., in the extension activation or state update path), or refactor to check the experiment state dynamically at runtime like other experiments do.
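For concreteness, a sketch of the two options described above. Only the experiments.isEnabled call shape (quoted from existing usage) and setRepairEnabled (from this PR) come from the source; the import paths, the LLM_RESPONSE_REPAIR id, and the function names are assumptions for illustration.

```typescript
// Sketch only: import paths and the experiment id constant are assumed, not taken from the repo.
import { experiments, EXPERIMENT_IDS } from "../shared/experiments"
import { NativeToolCallParser } from "./NativeToolCallParser"

// Option A: call setRepairEnabled() from the state-update path so the static
// flag actually tracks the experimental setting.
export function syncRepairExperiment(state?: { experiments?: Record<string, boolean> }): void {
	NativeToolCallParser.setRepairEnabled(
		experiments.isEnabled(state?.experiments ?? {}, EXPERIMENT_IDS.LLM_RESPONSE_REPAIR),
	)
}

// Option B: drop the static flag and gate the repair at the call site instead,
// matching how other experiments are checked at runtime.
export function repairExperimentEnabled(state?: { experiments?: Record<string, boolean> }): boolean {
	return experiments.isEnabled(state?.experiments ?? {}, EXPERIMENT_IDS.LLM_RESPONSE_REPAIR)
}
```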
Fix it with Roo Code or mention @roomote and request a fix.
This is an improvement, but as described in #10481, BAML would be more robust and extensible for future formats and LLMs. BAML also would not require Rust compilation, since it ships packaged binaries and an npm TypeScript package that would make implementation relatively easy.
This PR attempts to address Issue #10481.
Summary
Adds an experimental feature to automatically repair malformed JSON responses from LLMs that struggle with strict output formats (e.g., Grok Code).
Changes
New Experimental Setting
- Adds llmResponseRepair to the experiments system, disabled by default

JSON Repair Utility (src/utils/repair-json.ts)
- Repairs common JSON malformations, including trailing commas, single quotes, unquoted keys, missing closing brackets/braces, prefixed text, and markdown code blocks

Integration
- Updates NativeToolCallParser to use the repair utility as a fallback when JSON.parse fails (only when the experiment is enabled), as sketched below

Tests
- Adds a comprehensive test suite for the repair utility
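A minimal sketch of what that fallback flow could look like. The actual NativeToolCallParser integration may differ; parseToolArguments is a hypothetical helper, and only the repair-json utility path comes from this PR.

```typescript
import { repairJson } from "../utils/repair-json" // utility path taken from this PR

// Hypothetical helper illustrating the fallback: parse strictly first, and only
// attempt a repair pass when the llmResponseRepair experiment is enabled.
export function parseToolArguments(raw: string, repairEnabled: boolean): unknown {
	try {
		return JSON.parse(raw)
	} catch (error) {
		if (!repairEnabled) {
			throw error
		}
		return JSON.parse(repairJson(raw))
	}
}
```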
Notes
This is a simpler alternative to full BAML integration (which would require Rust compilation) and focuses specifically on the attempt_completion and tool call issues mentioned in the issue. The pattern-based approach handles the most common malformed JSON patterns seen in LLM outputs.

Feedback and guidance are welcome!