Utilities for working with LLM outputs.
Currently this crate focuses on turning "fuzzy" JSON-like text (common in LLM responses) into a real `serde_json::Value`.
- Fuzzy JSON parsing: Handle markdown code fences, single-quoted strings, unquoted keys, Python literals (`True`/`False`/`None`), trailing commas
- Partial JSON parsing: Auto-complete incomplete JSON structures (useful for streaming)
- Streaming parser: Accumulate chunks and parse at any point
Fuzzy parsing:

```rust
use llmx::json::parse_fuzzy_json;

let v = parse_fuzzy_json("```json\n{'a': True, b: None}\n```").unwrap();
assert_eq!(v["a"], true);
assert!(v["b"].is_null());
```

Streaming parsing:

```rust
use llmx::json::StreamingJsonParser;

let mut parser = StreamingJsonParser::new();
// Simulate streaming input
parser.push(r#"{"name":"#);
let v = parser.parse_partial().unwrap();
assert!(v.is_object());
parser.push(r#""Alice","age":30}"#);
let v = parser.parse_partial().unwrap();
assert_eq!(v["name"], "Alice");
assert_eq!(v["age"], 30);
```

Install via npm:
```bash
npm install llmx
```

`parseFuzzyJson` parses fuzzy JSON-like text into a JavaScript value. It handles:
- Markdown code fences (`` ```json ... ``` ``)
- Single-quoted strings
- Unquoted object keys
- Python literals (`True`/`False`/`None`)
- Trailing commas
```ts
import { parseFuzzyJson } from "llmx";

const value = parseFuzzyJson("```json\n{'a': True, b: None}\n```");
console.log(value); // { a: true, b: null }
```

`parsePartialJson` parses partial/incomplete JSON, automatically completing unclosed structures. Useful for streaming LLM outputs.
```ts
import { parsePartialJson } from "llmx";

const value = parsePartialJson('{"name": "Alice", "age":');
console.log(value); // { name: "Alice", age: null }
```
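The auto-completion also covers unclosed arrays and nested objects. A small illustrative sketch; the exact completed value is an assumption about the completion rules, not quoted from the library docs:

```ts
import { parsePartialJson } from "llmx";

// Both the array and the surrounding object are left open here;
// auto-completion should close them so the buffered data is still usable.
const value = parsePartialJson('{"items": ["alpha", "beta"');
console.log(value); // expected: { items: ["alpha", "beta"] }
```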
`parsePartialJsonWithStatus` parses partial JSON and returns both the value and its completion status.

```ts
import { parsePartialJsonWithStatus } from "llmx";

const result = parsePartialJsonWithStatus('{"name": "Alice"}');
console.log(result.value); // { name: "Alice" }
console.log(result.isComplete); // true
const partial = parsePartialJsonWithStatus('{"name": "Ali');
console.log(partial.value); // { name: "Ali" }
console.log(partial.isComplete); // false
```

`StreamingJsonParser` is a streaming JSON parser for incremental LLM output parsing: push chunks as they arrive and parse the accumulated buffer at any point.
```ts
import { StreamingJsonParser } from "llmx";

const parser = new StreamingJsonParser();
// Simulate streaming input
parser.push('{"name":');
console.log(parser.parsePartial()); // { name: null }
parser.push('"Alice","age":30}');
console.log(parser.parsePartial()); // { name: "Alice", age: 30 }
// Check completion status
const { value, isComplete } = parser.parsePartialWithStatus();
console.log(isComplete); // true
// Access buffer contents
console.log(parser.buffer()); // '{"name":"Alice","age":30}'
// Clear for reuse
parser.clear();
```
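In real use, the chunks come from a model stream rather than hard-coded strings. A minimal sketch of that loop, assuming the chunks arrive as an `AsyncIterable<string>` (the stream source is hypothetical and not part of llmx):

```ts
import { StreamingJsonParser } from "llmx";

// Accumulate streamed model output and return the parsed JSON value.
// `chunks` is any async source of text, e.g. an SSE or SDK token stream.
async function collectJson(chunks: AsyncIterable<string>): Promise<unknown> {
  const parser = new StreamingJsonParser();
  for await (const chunk of chunks) {
    parser.push(chunk);
    // Best-effort value after every chunk, useful for live UI updates.
    const { value, isComplete } = parser.parsePartialWithStatus();
    console.log("partial:", value);
    if (isComplete) return value; // stop once the JSON value is complete
  }
  // Stream ended early: fall back to the auto-completed parse.
  return parser.parsePartial();
}
```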
| Method | Description |
|---|---|
| `new StreamingJsonParser()` | Create a new parser instance |
| `push(chunk: string)` | Append a chunk of data to the buffer |
| `buffer(): string` | Get the current buffer contents |
| `clear()` | Clear the parser buffer |
| `parsePartial(): any` | Parse buffer with auto-completion |
| `parsePartialWithStatus(): { value: any; isComplete: boolean }` | Parse with completion status |
| `parseComplete(): any` | Parse as complete JSON (no auto-completion, throws if incomplete) |
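`parseComplete()` has no example above; per the table it skips auto-completion and throws on incomplete input, so it suits the end of a stream when the output must be strictly valid JSON. A small sketch (the thrown error type is not specified here, so the catch is deliberately generic):

```ts
import { StreamingJsonParser } from "llmx";

const parser = new StreamingJsonParser();
parser.push('{"done": tr');

// The buffer is still incomplete, so strict parsing should throw.
try {
  parser.parseComplete();
} catch (err) {
  console.log("incomplete buffer:", parser.buffer()); // '{"done": tr'
}

// After the final chunk arrives, parseComplete() returns the value.
parser.push("ue}");
console.log(parser.parseComplete()); // { done: true }
```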