Transform JavaScript/TypeScript errors into LLM-optimized context for AI-powered debugging.
Instead of cryptic stack traces, get rich context that LLMs can actually understand:
```json
{
  "summary": {
    "error": "Cannot read properties of null (reading 'name')",
    "likelyCause": "Attempting to access property on null value",
    "fixComplexity": "trivial"
  },
  "code": {
    "failingLine": "return user.name.toUpperCase();",
    "surroundingLines": ["// context", "// more context"],
    "interpretation": "The code is trying to access 'user.name' but user is null"
  },
  "diagnostics": {
    "nullCheckNeeded": true,
    "suggestedFixes": [
      "Add null check: if (user && user.name)",
      "Use optional chaining: user?.name"
    ]
  },
  "data": {
    "functionArguments": [null],
    "capturedVariables": { "user": null }
  }
}
```
- **Smart Serialization** - Objects are serialized with type hints for LLMs
- **Code Context** - Shows surrounding lines and interprets what went wrong
- **Data Flow Tracking** - Shows how data moved through your functions
- **Fix Suggestions** - Pattern matching for common errors
- **Express Middleware** - Drop-in integration for web apps
Wrap any function, and errors thrown inside it are captured with full context:

```typescript
import { llmWrap } from 'llm-errors';

const processUser = llmWrap(function (user) {
  return user.name.toUpperCase(); // will capture context if this fails
});
```
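Calling the wrapped function with bad input then surfaces the enriched report. A hypothetical usage sketch; the `llmContext` property name is an illustrative assumption, not a confirmed part of the llm-errors API:

```typescript
try {
  processUser(null); // triggers the null property access above
} catch (err) {
  // Hypothetical property name; check the library's types for the real shape.
  console.log(JSON.stringify((err as { llmContext?: unknown }).llmContext, null, 2));
}
```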
For web apps, the Express middleware is a drop-in:

```typescript
import { llmErrorMiddleware } from 'llm-errors';

app.use(llmErrorMiddleware({
  enabled: true,
  outputFormat: 'json'
}));
```
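Ordering matters in Express: error-handling middleware only sees errors from routes registered before it. A minimal sketch (the route and in-memory user store are illustrative):

```typescript
import express from 'express';
import { llmErrorMiddleware } from 'llm-errors';

const app = express();

// Illustrative store; unknown ids resolve to undefined.
const users: Record<string, { name: string }> = { '1': { name: 'Ada' } };

app.get('/user/:id', (req, res) => {
  const user = users[req.params.id];
  res.json({ name: user.name.toUpperCase() }); // throws for unknown ids
});

// Registered last, so errors thrown in the routes above reach it.
app.use(llmErrorMiddleware({ enabled: true, outputFormat: 'json' }));

app.listen(3000);
```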
For finer control, create a handler directly:

```typescript
import { createLLMErrorHandler } from 'llm-errors';

const handler = createLLMErrorHandler({
  captureLocals: true,       // pull local variables into the report
  includeSuggestions: true,  // attach pattern-based fix suggestions
  redactPatterns: [/api[_-]?key/gi, /password/gi] // scrub secrets from output
});

const wrapped = handler.wrap(myFunction, 'myFunction');
```
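The redaction step can be pictured as a scrub over captured variables before formatting. An illustrative sketch, not the library's exact implementation:

```typescript
// Scrub captured variables whose names or string values match a pattern.
function redact(vars: Record<string, unknown>, patterns: RegExp[]) {
  // Drop the 'g' flag so repeated .test() calls stay stateless.
  const safe = patterns.map((p) => new RegExp(p.source, p.flags.replace('g', '')));
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(vars)) {
    const hit = safe.some(
      (p) => p.test(key) || (typeof value === 'string' && p.test(value))
    );
    out[key] = hit ? '[REDACTED]' : value;
  }
  return out;
}

redact({ apiKey: 'sk-123', userName: 'ada' }, [/api[_-]?key/gi, /password/gi]);
// → { apiKey: '[REDACTED]', userName: 'ada' }
```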
- **Context Capture** (`context-capture.ts`)
  - Uses the V8 inspector API to capture local variables
  - Falls back to `Error.prepareStackTrace` for basic context
  - Extracts call frames with function arguments
- **LLM Formatter** (`llm-formatter.ts`)
  - Serializes complex objects for LLM consumption
  - Interprets errors based on patterns
  - Tracks execution flow and data mutations
  - Generates fix suggestions
- **Integration Layer** (`index.ts`)
  - Function wrapping with automatic error handling
  - Express middleware integration
  - Async context tracking via `AsyncLocalStorage` (sketched below)
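The async tracking can be sketched with `AsyncLocalStorage`; the names below are illustrative, not the library's internals:

```typescript
import { AsyncLocalStorage } from 'node:async_hooks';

type CallRecord = { fn: string; args: unknown[] };
const trace = new AsyncLocalStorage<CallRecord[]>();

// Record each call into the current trace before delegating.
function tracked<T extends unknown[], R>(name: string, fn: (...args: T) => R) {
  return (...args: T): R => {
    trace.getStore()?.push({ fn: name, args });
    return fn(...args);
  };
}

// Run work inside a fresh trace; on failure, the trace shows the data flow.
async function runWithTrace<R>(work: () => Promise<R>): Promise<R> {
  return trace.run([], async () => {
    try {
      return await work();
    } catch (err) {
      console.error('call trace:', trace.getStore());
      throw err;
    }
  });
}
```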
We attempted to use the V8 inspector protocol to capture runtime context (a simplified sketch follows the list):
- Set breakpoints at error locations
- Extract local variable values
- Capture full execution state
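A simplified sketch of the inspector approach using `node:inspector` in the same process; the real `context-capture.ts` also handles session teardown and edge cases:

```typescript
import { Session } from 'node:inspector';

const session = new Session();
session.connect();
session.post('Debugger.enable');
// Pause whenever an exception is thrown so the frame can be inspected.
// ('uncaught' would pause only on unhandled throws.)
session.post('Debugger.setPauseOnExceptions', { state: 'all' });

session.on('Debugger.paused', ({ params }) => {
  const localScope = params.callFrames[0]?.scopeChain.find((s) => s.type === 'local');
  const objectId = localScope?.object.objectId;
  if (!objectId) return void session.post('Debugger.resume');
  // Read the local variables out of the paused frame.
  session.post('Runtime.getProperties', { objectId, ownProperties: true }, (err, res) => {
    if (!err) {
      for (const prop of res.result) {
        console.log(prop.name, prop.value?.value ?? prop.value?.description);
      }
    }
    session.post('Debugger.resume');
  });
});
```

When an inspector session is not available, `Error.prepareStackTrace` still recovers structured call sites (function names, files, line numbers), though not variable values:

```typescript
// Must run before error.stack is first read, since V8 formats it lazily.
function captureCallSites(error: Error): NodeJS.CallSite[] {
  const original = Error.prepareStackTrace;
  Error.prepareStackTrace = (_err, sites) => sites;
  const sites = error.stack as unknown as NodeJS.CallSite[];
  Error.prepareStackTrace = original;
  return sites;
}
```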
Objects are serialized with metadata for LLMs:

```typescript
// Instead of: [object Object]
// we get:
{
  type: 'Promise',
  status: 'pending',
  hint: 'Missing await?'
}
```
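A minimal sketch of how such type hints can be produced (illustrative names; the real `llm-formatter.ts` covers many more types):

```typescript
import { types } from 'node:util';

function describeForLLM(value: unknown): Record<string, unknown> {
  if (value === null) return { type: 'null', hint: 'Expected data missing?' };
  if (types.isPromise(value)) {
    // A promise's state is not synchronously inspectable; flag it with a hint.
    return { type: 'Promise', status: 'pending', hint: 'Missing await?' };
  }
  if (value instanceof Map) {
    return { type: 'Map', size: value.size, sampleKeys: [...value.keys()].slice(0, 5) };
  }
  if (typeof value === 'function') {
    return { type: 'Function', name: value.name || '(anonymous)' };
  }
  return { type: typeof value, value };
}
```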
Common error patterns are automatically interpreted (a minimal matcher sketch follows the list):
- Null/undefined access → Suggests null checks
- Promise without await → Suggests adding await
- Type mismatches → Shows expected vs actual types
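An illustrative rule table, not the library's exact patterns:

```typescript
interface ErrorPattern {
  match: RegExp;
  likelyCause: string;
  suggestions: string[];
}

const PATTERNS: ErrorPattern[] = [
  {
    match: /Cannot read propert(y|ies) of (null|undefined)/,
    likelyCause: 'Accessing a property on a null or undefined value',
    suggestions: ['Add a null check', 'Use optional chaining: obj?.prop'],
  },
  {
    match: /is not a function/,
    likelyCause: 'Calling something that is not a function (often an un-awaited Promise)',
    suggestions: ['Verify the value at the call site', 'Add a missing await'],
  },
];

// Return the first matching interpretation, or null if unrecognized.
function interpret(error: Error): ErrorPattern | null {
  return PATTERNS.find((p) => p.match.test(error.message)) ?? null;
}
```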
- Build-Time Transform - TypeScript transformer for zero runtime cost
- AI Service Integration - Direct API calls to Claude/GPT
- Error Learning - Learn from fixed errors to improve suggestions
- IDE Plugin - Click "Debug with AI" in VSCode
- Production Sampling - Smart sampling to minimize overhead
```bash
# Install dependencies
npm install

# Run test suite
npm run test

# Run Express demo server
npm run demo

# Build TypeScript
npm run build
```
Current debugging with AI involves:
- Copy error message
- Paste to AI
- AI asks "what's in the user object?"
- Go back, add console.log
- Copy new output...
With LLM Error Formatter:
- Error happens
- Full context automatically captured
- AI has everything needed to help immediately
This is a proof of concept demonstrating:
- ✅ Core error formatting for LLMs
- ✅ Express middleware integration
- ✅ Smart object serialization
- ✅ Pattern-based error interpretation
- ⚠️ V8 inspector context capture (needs refinement)
- 🔄 Production-ready performance optimizations (TODO)
This is an experimental project exploring how to make errors more AI-friendly. Key areas for contribution:
- Improve V8 inspector integration
- Add more error patterns
- Performance optimizations
- Additional framework integrations