hew/llm-errors

LLM Error Formatter - Proof of Concept

Transform JavaScript/TypeScript errors into LLM-optimized context for AI-powered debugging.

What This Does

Instead of cryptic stack traces, get rich context that LLMs can actually understand:

{
  "summary": {
    "error": "Cannot read properties of null (reading 'name')",
    "likelyCause": "Attempting to access property on null value",
    "fixComplexity": "trivial"
  },
  "code": {
    "failingLine": "return user.name.toUpperCase();",
    "surroundingLines": ["// context", "// more context"],
    "interpretation": "The code is trying to access 'user.name' but user is null"
  },
  "diagnostics": {
    "nullCheckNeeded": true,
    "suggestedFixes": [
      "Add null check: if (user && user.name)",
      "Use optional chaining: user?.name"
    ]
  },
  "data": {
    "functionArguments": [null],
    "capturedVariables": {"user": null}
  }
}

Key Features

  1. Smart Serialization - Objects are serialized with type hints for LLMs
  2. Code Context - Shows surrounding lines and interprets what went wrong
  3. Data Flow Tracking - Traces how data moved through your functions before the failure
  4. Fix Suggestions - Pattern matching for common errors
  5. Express Middleware - Drop-in integration for web apps

Usage

Basic Function Wrapping

import { llmWrap } from 'llm-errors';

const processUser = llmWrap(function(user) {
  return user.name.toUpperCase(); // Will capture context if this fails
});
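Under the hood, a wrapper like this can be a try/catch that attaches the captured arguments to the error before re-throwing. A minimal sketch of the idea (`llmWrapSketch` and the `llmContext` field are illustrative names, not the library's actual API):

```typescript
// Illustrative sketch: catch the error, attach the captured arguments,
// and re-throw so a downstream handler can serialize them for the LLM.
type LLMContext = { functionArguments: unknown[]; error: string };

function llmWrapSketch<A extends unknown[], R>(
  fn: (...args: A) => R
): (...args: A) => R {
  return (...args: A): R => {
    try {
      return fn(...args);
    } catch (err) {
      const e = err as Error & { llmContext?: LLMContext };
      // Attach rich context so the formatter can include the real
      // runtime values, not just the stack trace.
      e.llmContext = { functionArguments: args, error: e.message };
      throw e;
    }
  };
}
```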

Express Middleware

import { llmErrorMiddleware } from 'llm-errors';

app.use(llmErrorMiddleware({
  enabled: true,
  outputFormat: 'json'
}));

Advanced Handler

import { createLLMErrorHandler } from 'llm-errors';

const handler = createLLMErrorHandler({
  captureLocals: true,
  includeSuggestions: true,
  redactPatterns: [/api[_-]?key/gi, /password/gi]
});

const wrapped = handler.wrap(myFunction, 'myFunction');
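One plausible way `redactPatterns` could be applied is by matching the configured patterns against captured variable names before serialization, so secrets never reach the LLM. A sketch under that assumption (`redactCaptured` is a hypothetical helper, not the library's code):

```typescript
// Hypothetical sketch: blank out any captured variable whose name
// matches one of the configured redaction patterns.
function redactCaptured(
  vars: Record<string, unknown>,
  patterns: RegExp[]
): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(vars)) {
    const hit = patterns.some((p) => {
      p.lastIndex = 0; // reset stateful /g patterns between tests
      return p.test(key);
    });
    out[key] = hit ? "[REDACTED]" : value;
  }
  return out;
}
```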

Architecture

Core Components

  1. Context Capture (context-capture.ts)

    • Uses V8 inspector API to capture local variables
    • Falls back to Error.prepareStackTrace for basic context
    • Extracts call frames with function arguments
  2. LLM Formatter (llm-formatter.ts)

    • Serializes complex objects for LLM consumption
    • Interprets errors based on patterns
    • Tracks execution flow and data mutations
    • Generates fix suggestions
  3. Integration Layer (index.ts)

    • Function wrapping with automatic error handling
    • Express middleware integration
    • Async context tracking via AsyncLocalStorage
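The async context tracking mentioned above can be built on Node's standard `AsyncLocalStorage`: each request runs inside its own store, so an error handler can recover request-scoped data even deep in async call chains. A minimal sketch of that design (the helper names are illustrative):

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

// Each request gets its own store; anything captured during the request
// (here just a request id) is retrievable from any code it calls.
const als = new AsyncLocalStorage<{ requestId: string }>();

function withRequest<R>(requestId: string, fn: () => R): R {
  return als.run({ requestId }, fn);
}

function currentRequestId(): string | undefined {
  return als.getStore()?.requestId;
}
```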

Technical Innovation

V8 Inspector Integration

We attempted to use the V8 inspector protocol to capture runtime context:

  • Set breakpoints at error locations
  • Extract local variable values
  • Capture full execution state
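The session plumbing for this approach uses Node's built-in `node:inspector` module. The sketch below only shows opening an in-process session and evaluating through the protocol; pausing the same thread on exceptions to read locals is the harder part the Status section flags as needing refinement:

```typescript
import { Session } from "node:inspector";

// Sketch: open an inspector session against the current process and
// evaluate an expression over the protocol. Capturing locals would
// additionally use Debugger.setPauseOnExceptions plus
// Runtime.getProperties on the paused frames' scopes.
function inspectorEval(expression: string): Promise<unknown> {
  return new Promise((resolve, reject) => {
    const session = new Session();
    session.connect();
    session.post("Runtime.evaluate", { expression }, (err, res) => {
      session.disconnect();
      if (err) return reject(err);
      resolve((res as any).result?.value);
    });
  });
}
```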

Smart Object Serialization

Objects are serialized with metadata for LLMs:

// Instead of: [object Object]
// We get:
{
  type: 'Promise',
  status: 'pending',
  hint: 'Missing await?'
}
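A sketch of how such type-hinted output could be produced (the shape mirrors the example above; `describeForLLM` is illustrative, and detecting a pending promise here leans on `util.inspect`'s rendering rather than a real inspector probe):

```typescript
import { inspect } from "node:util";

// Illustrative serializer: replace opaque values with a typed summary
// plus a hint an LLM can act on.
function describeForLLM(value: unknown): Record<string, unknown> {
  if (value instanceof Promise) {
    // util.inspect renders unresolved promises as "Promise { <pending> }".
    const pending = inspect(value).includes("<pending>");
    return {
      type: "Promise",
      status: pending ? "pending" : "settled",
      hint: pending ? "Missing await?" : undefined,
    };
  }
  return { type: typeof value, value };
}
```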

Pattern-Based Error Interpretation

Common error patterns are automatically interpreted:

  • Null/undefined access → Suggests null checks
  • Promise without await → Suggests adding await
  • Type mismatches → Shows expected vs actual types
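Interpretation like this boils down to a lookup table of message patterns. A small sketch with an illustrative pattern set (not the library's actual table):

```typescript
// Illustrative pattern table: match the error message, return a fix hint.
const patterns: Array<{ match: RegExp; suggestion: string }> = [
  {
    match: /Cannot read propert(y|ies) of (null|undefined)/,
    suggestion: "Add a null check or use optional chaining (?.)",
  },
  {
    match: /is not a function/,
    suggestion: "Check the value's type; a Promise here often means a missing await",
  },
];

function suggestFix(message: string): string | undefined {
  return patterns.find((p) => p.match.test(message))?.suggestion;
}
```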

Future Enhancements

  1. Build-Time Transform - TypeScript transformer for zero runtime cost
  2. AI Service Integration - Direct API calls to Claude/GPT
  3. Error Learning - Learn from fixed errors to improve suggestions
  4. IDE Plugin - Click "Debug with AI" in VSCode
  5. Production Sampling - Smart sampling to minimize overhead

Running the Demo

# Install dependencies
npm install

# Run test suite
npm run test

# Run Express demo server
npm run demo

# Build TypeScript
npm run build

Why This Matters

Current debugging with AI involves:

  1. Copy error message
  2. Paste to AI
  3. AI asks "what's in the user object?"
  4. Go back, add console.log
  5. Copy new output...

With LLM Error Formatter:

  1. Error happens
  2. Full context automatically captured
  3. AI has everything needed to help immediately

Status

This is a proof of concept demonstrating:

  • ✅ Core error formatting for LLMs
  • ✅ Express middleware integration
  • ✅ Smart object serialization
  • ✅ Pattern-based error interpretation
  • ⚠️ V8 inspector context capture (needs refinement)
  • 🔄 Production-ready performance optimizations (TODO)

Contributing

This is an experimental project exploring how to make errors more AI-friendly. Key areas for contribution:

  • Improve V8 inspector integration
  • Add more error patterns
  • Performance optimizations
  • Additional framework integrations
