A comprehensive TypeScript middleware library for building robust multi-provider LLM backends. Currently supports Ollama and Anthropic Claude, with OpenAI and Google planned. Features advanced JSON cleaning, logging, error handling, and more.
Features:
- Clean Architecture: Base classes and interfaces for scalable AI applications
- Multi-Provider Architecture: Extensible provider system with strategy pattern
- Ollama: Fully supported with comprehensive parameter control
- Anthropic Claude: Complete support for Claude models (Opus, Sonnet, Haiku)
- OpenAI, Google: Planned for future releases
- Pluggable: Easy to add custom providers - see the LLM Providers Guide
- JSON Cleaning: Recipe-based JSON repair system with automatic strategy selection
- v2.4.0: Enhanced array extraction support - properly handles JSON arrays `[...]` in addition to objects `{...}`
- FlatFormatter System: Advanced data formatting for LLM consumption
- Comprehensive Logging: Multi-level logging with metadata support
- Configuration Management: Flexible model and application configuration
- Error Handling: Robust error handling and recovery mechanisms
- TypeScript First: Full type safety throughout the entire stack
- Modular Design: Use only what you need
- Testing Ready: Includes example implementations and test utilities
Install from npm:
npm install @loonylabs/llm-middleware

Or install directly from GitHub:

npm install github:loonylabs-dev/llm-middleware

Or using a specific version/tag:

npm install github:loonylabs-dev/llm-middleware#v1.3.0

Quick start:

import { BaseAIUseCase, BaseAIRequest, BaseAIResult, LLMProvider } from '@loonylabs/llm-middleware';
// Define your request/response interfaces
interface MyRequest extends BaseAIRequest<string> {
message: string;
}
interface MyResult extends BaseAIResult {
response: string;
}
// Create your use case (uses Ollama by default)
class MyChatUseCase extends BaseAIUseCase<string, MyRequest, MyResult> {
protected readonly systemMessage = "You are a helpful assistant.";
// Required: return user message template function
protected getUserTemplate(): (formattedPrompt: string) => string {
return (message) => message;
}
protected formatUserMessage(prompt: any): string {
return typeof prompt === 'string' ? prompt : prompt.message;
}
protected createResult(content: string, usedPrompt: string, thinking?: string): MyResult {
return {
generatedContent: content,
model: this.modelConfig.name,
usedPrompt: usedPrompt,
thinking: thinking,
response: content
};
}
}
// Switch to different provider (optional)
class MyAnthropicChatUseCase extends MyChatUseCase {
protected getProvider(): LLMProvider {
return LLMProvider.ANTHROPIC; // Use Claude instead of Ollama
}
}

Using the Multi-Provider Architecture
import { llmService, LLMProvider, ollamaProvider, anthropicProvider } from '@loonylabs/llm-middleware';
// Option 1: Use the LLM Service orchestrator (recommended for flexibility)
const response1 = await llmService.call(
"Write a haiku about coding",
{
provider: LLMProvider.OLLAMA, // Explicitly specify provider
model: "llama2",
temperature: 0.7
}
);
// Use Anthropic Claude
const response2 = await llmService.call(
"Explain quantum computing",
{
provider: LLMProvider.ANTHROPIC,
model: "claude-3-5-sonnet-20241022",
authToken: process.env.ANTHROPIC_API_KEY,
maxTokens: 1024,
temperature: 0.7
}
);
// Option 2: Use provider directly for provider-specific features
const response3 = await ollamaProvider.callWithSystemMessage(
"Write a haiku about coding",
"You are a creative poet",
{
model: "llama2",
temperature: 0.7,
// Ollama-specific parameters
repeat_penalty: 1.1,
top_k: 40
}
);
// Or use Anthropic provider directly
const response4 = await anthropicProvider.call(
"Write a haiku about coding",
{
model: "claude-3-5-sonnet-20241022",
authToken: process.env.ANTHROPIC_API_KEY,
maxTokens: 1024
}
);
// Set default provider for your application
llmService.setDefaultProvider(LLMProvider.OLLAMA);
// Now calls use Ollama by default
const response5 = await llmService.call("Hello!", { model: "llama2" });

For more details on the multi-provider system, see the LLM Providers Guide.
Advanced Example with FlatFormatter
import {
  BaseAIUseCase,
  BaseAIResult,
  FlatFormatter,
  personPreset
} from '@loonylabs/llm-middleware';
// Result type for this use case: the base result plus the parsed profile
interface ProfileResult extends BaseAIResult {
  profile: Record<string, unknown>;
}

class ProfileGeneratorUseCase extends BaseAIUseCase {
protected readonly systemMessage = `You are a professional profile creator.
IMPORTANT: Respond with ONLY valid JSON following this schema:
{
"name": "Person name",
"title": "Professional title",
"summary": "Brief professional overview",
"skills": "Key skills and expertise",
"achievements": "Notable accomplishments"
}`;
// Use FlatFormatter and presets for rich context building
protected formatUserMessage(prompt: any): string {
const { person, preferences, guidelines } = prompt;
const contextSections = [
// Use preset for structured data
personPreset.formatForLLM(person, "## PERSON INFO:"),
// Use FlatFormatter for custom structures
`## PREFERENCES:\n${FlatFormatter.flatten(preferences, {
format: 'bulleted',
keyValueSeparator: ': '
})}`,
// Format guidelines with FlatFormatter
`## GUIDELINES:\n${FlatFormatter.flatten(
guidelines.map(g => ({
guideline: g,
priority: "MUST FOLLOW"
})),
{
format: 'numbered',
entryTitleKey: 'guideline',
ignoredKeys: ['guideline']
}
)}`
];
return contextSections.join('\n\n');
}
protected createResult(content: string, usedPrompt: string, thinking?: string): ProfileResult {
return {
generatedContent: content,
model: this.modelConfig.name,
usedPrompt,
thinking,
profile: JSON.parse(content)
};
}
}
// Use it
const profileGen = new ProfileGeneratorUseCase();
const result = await profileGen.execute({
prompt: {
person: { name: "Alice", occupation: "Engineer" },
preferences: { tone: "professional", length: "concise" },
guidelines: ["Highlight technical skills", "Include leadership"]
},
authToken: "optional-token"
});

Required Dependencies
- Node.js 18+
- TypeScript 4.9+
- An LLM provider configured (e.g., a running Ollama server for the Ollama provider)
Environment Setup
Create a .env file in your project root:
# Server Configuration
PORT=3000
NODE_ENV=development
# Logging
LOG_LEVEL=info
# LLM Provider Configuration
MODEL1_NAME=phi3:mini # Required: Your model name
MODEL1_URL=http://localhost:11434 # Optional: Defaults to localhost (Ollama)
MODEL1_TOKEN=optional-auth-token # Optional: For authenticated providers
# Anthropic API Configuration (Optional)
ANTHROPIC_API_KEY=your_anthropic_api_key_here # Your Anthropic API key
ANTHROPIC_MODEL=claude-3-5-sonnet-20241022 # Default Claude model

Multi-Provider Support: The middleware is fully integrated with Ollama and Anthropic Claude. Support for OpenAI and Google is planned. See the LLM Providers Guide for details on the provider system and how to use or add providers.
The middleware follows Clean Architecture principles:
src/
├── middleware/
│   ├── controllers/base/        # Base HTTP controllers
│   ├── usecases/base/           # Base AI use cases
│   ├── services/                # External service integrations
│   │   ├── llm/                 # LLM provider services (Ollama, OpenAI, etc.)
│   │   ├── json-cleaner/        # JSON repair and validation
│   │   └── response-processor/  # AI response processing
│   └── shared/                  # Common utilities and types
│       ├── config/              # Configuration management
│       ├── types/               # TypeScript interfaces
│       └── utils/               # Utility functions
└── examples/                    # Example implementations
    └── simple-chat/             # Basic chat example
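To make the layering concrete, here is a minimal sketch of exposing a use case over HTTP. It is an illustration under stated assumptions: it uses a plain Express route and the MyChatUseCase from the Quick Start above rather than the library's base controller classes, whose API is covered in the Architecture Overview.

import express from 'express';
import { MyChatUseCase } from './my-chat-usecase'; // hypothetical path to the Quick Start use case

const app = express();
app.use(express.json());

app.post('/api/chat', async (req, res) => {
  try {
    const useCase = new MyChatUseCase();
    // Request shape follows the MyRequest interface from the Quick Start
    const result = await useCase.execute({ prompt: req.body.message, message: req.body.message });
    res.json({ response: result.response });
  } catch (error) {
    res.status(500).json({ error: (error as Error).message });
  }
});

app.listen(3000);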
Documentation:

- Getting Started Guide
- Architecture Overview
- LLM Providers Guide - Multi-provider architecture and how to use different LLM services
- LLM Provider Parameters - Ollama-specific parameter reference and presets
- Request Formatting Guide - FlatFormatter vs RequestFormatterService
- Performance Monitoring - Metrics and logging
- API Reference
- Examples
- CHANGELOG - Release notes and breaking changes
The middleware includes comprehensive test suites covering unit tests, integration tests, robustness tests, and end-to-end workflows.
# Build the middleware first
npm run build
# Run all automated tests
npm run test:all
# Run unit tests only
npm run test:unit

For complete testing documentation, see tests/README.md.
The test documentation includes:
- Quick reference table for all tests
- Detailed test descriptions and prerequisites
- Troubleshooting guide
- Development workflow best practices
Demonstrating Token Limiting with Social Media Content
The Tweet Generator example showcases parameter configuration for controlling output length:
import { TweetGeneratorUseCase } from '@loonylabs/llm-middleware';
const tweetGenerator = new TweetGeneratorUseCase();
const result = await tweetGenerator.execute({
prompt: 'The importance of clean code in software development'
});
console.log(result.tweet); // Generated tweet
console.log(result.characterCount); // Character count
console.log(result.withinLimit); // true if ≤ 280 chars

Key Features:
- Token Limiting: Uses `maxTokens: 70` to limit output to ~280 characters (provider-agnostic!)
- Character Validation: Automatically checks if output is within Twitter's limit
- Marketing Preset: Optimized parameters for engaging, concise content
- Testable: Integration test verifies parameter effectiveness
Parameter Configuration:
protected getParameterOverrides(): ModelParameterOverrides {
return {
// NEW in v2.7.0: Provider-agnostic maxTokens (recommended)
maxTokens: 70, // Works for Anthropic, OpenAI, Ollama, Google
// Parameter tuning
temperatureOverride: 0.7,
repeatPenalty: 1.3,
frequencyPenalty: 0.3,
presencePenalty: 0.2,
topP: 0.9,
topK: 50,
repeatLastN: 32
};
}
// Legacy Ollama-specific approach (still works):
protected getParameterOverrides(): ModelParameterOverrides {
return {
num_predict: 70, // Ollama-specific (deprecated)
// ... other params
};
}

This example demonstrates:
- How to configure parameters for specific output requirements
- Token limiting as a practical use case
- Validation and testing of parameter effectiveness
- Real-world application (social media content generation)
See src/examples/tweet-generator/ for full implementation.
Quick Example Setup
Run the included examples:
# Clone the repository
git clone https://github.com/loonylabs-dev/llm-middleware.git
cd llm-middleware
# Install dependencies
npm install
# Copy environment template
cp .env.example .env
# Start your LLM provider (example for Ollama)
ollama serve
# Run the example
npm run dev

Test the API:
curl -X POST http://localhost:3000/api/chat \
-H "Content-Type: application/json" \
-d '{"message": "Hello, how are you?"}'

Recipe-Based JSON Cleaning System
Advanced JSON repair with automatic strategy selection and modular operations:
import { JsonCleanerService, JsonCleanerFactory } from '@loonylabs/llm-middleware';
// Simple usage (async - uses new recipe system with fallback)
const result = await JsonCleanerService.processResponseAsync(malformedJson);
console.log(result.cleanedJson);
// Legacy sync method (still works)
const cleaned = JsonCleanerService.processResponse(malformedJson);
// Advanced: Quick clean with automatic recipe selection
const result = await JsonCleanerFactory.quickClean(malformedJson);
console.log('Success:', result.success);
console.log('Confidence:', result.confidence);
console.log('Changes:', result.totalChanges);

Features:
- Automatic strategy selection (Conservative/Aggressive/Adaptive)
- Modular detectors & fixers for specific problems
- Extracts JSON from Markdown/Think-Tags (see the sketch after this list)
- Checkpoint/Rollback support for safe repairs
- Detailed metrics (confidence, quality, performance)
- Fallback to legacy system for compatibility
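As a worked illustration of the features above (a sketch only; the malformed input is hypothetical and the expected behavior is the one described in the feature list):

import { JsonCleanerService } from '@loonylabs/llm-middleware';

// Hypothetical LLM output: a <think> block followed by JSON with a trailing comma
const raw = '<think>Drafting the object...</think>{ "title": "Weekly Report", "tags": ["ops", "infra",] }';

async function demo(): Promise<void> {
  // Recipe-based cleaning with automatic strategy selection (falls back to the legacy system)
  const result = await JsonCleanerService.processResponseAsync(raw);
  const parsed = JSON.parse(result.cleanedJson); // think tag stripped, trailing comma repaired
  console.log(parsed.title); // "Weekly Report"
}

demo().catch(console.error);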
Available Templates:
import { RecipeTemplates } from '@loonylabs/llm-middleware';
const conservativeRecipe = RecipeTemplates.conservative();
const aggressiveRecipe = RecipeTemplates.aggressive();
const adaptiveRecipe = RecipeTemplates.adaptive();

See the Recipe System Documentation for details.
Request Formatting (FlatFormatter & RequestFormatterService)
For simple data: Use FlatFormatter
const flat = FlatFormatter.flatten({ name: 'Alice', age: 30 });

For complex nested prompts: Use RequestFormatterService
import { RequestFormatterService } from '@loonylabs/llm-middleware';
const prompt = {
context: { genre: 'sci-fi', tone: 'dark' },
instruction: 'Write an opening'
};
const formatted = RequestFormatterService.formatUserMessage(
prompt, (s) => s, 'MyUseCase'
);
// Outputs: ## CONTEXT:\ngenre: sci-fi\ntone: dark\n\n## INSTRUCTION:\nWrite an opening

See the Request Formatting Guide for details.
Performance Monitoring & Metrics
Automatic performance tracking with UseCaseMetricsLoggerService:
// Automatically logged for all use cases:
// - Execution time
// - Token usage (input/output)
// - Generation speed (tokens/sec)
// - Parameters used

Metrics appear in console logs:
Completed AI use case [MyUseCase = phi3:mini] SUCCESS
Time: 2.5s | Input: 120 tokens | Output: 85 tokens | Speed: 34.0 tokens/sec
See Performance Monitoring Guide for advanced usage.
Comprehensive Logging
Multi-level logging with contextual metadata:
import { logger } from '@loonylabs/llm-middleware';
logger.info('Operation completed', {
context: 'MyService',
metadata: { userId: 123, duration: 150 }
});

Model Configuration
Flexible model management:
import { getModelConfig } from '@loonylabs/llm-middleware';
// MODEL1_NAME is required in .env or will throw error
const config = getModelConfig('MODEL1');
console.log(config.name); // Value from MODEL1_NAME env variable
console.log(config.baseUrl); // Value from MODEL1_URL or default localhost

Customizing Model Configuration (New in v2.3.0)
Override the model configuration provider to use your own custom model configurations:
Use Cases:
- Multi-environment deployments (dev, staging, production)
- Dynamic model selection based on runtime conditions
- Loading model configs from external sources (database, API)
- Testing with different model configurations
New Pattern (Recommended):
import { BaseAIUseCase, ModelConfigKey, ValidatedLLMModelConfig } from '@loonylabs/llm-middleware';
// Define your custom model configurations
const MY_CUSTOM_MODELS: Record<string, ValidatedLLMModelConfig> = {
'PRODUCTION_MODEL': {
name: 'llama3.2:latest',
baseUrl: 'http://production-server.com:11434',
temperature: 0.7
},
'DEVELOPMENT_MODEL': {
name: 'llama3.2:latest',
baseUrl: 'http://localhost:11434',
temperature: 0.9
}
};
class MyCustomUseCase extends BaseAIUseCase<string, MyRequest, MyResult> {
// Override this method to provide custom model configurations
protected getModelConfigProvider(key: ModelConfigKey): ValidatedLLMModelConfig {
const config = MY_CUSTOM_MODELS[key];
if (!config?.name) {
throw new Error(`Model ${key} not found`);
}
return config;
}
// ... rest of your use case implementation
}

Environment-Aware Example:
class EnvironmentAwareUseCase extends BaseAIUseCase<string, MyRequest, MyResult> {
protected getModelConfigProvider(key: ModelConfigKey): ValidatedLLMModelConfig {
const env = process.env.NODE_ENV || 'development';
// Automatically select model based on environment
const modelKey = env === 'production' ? 'PRODUCTION_MODEL' :
env === 'staging' ? 'STAGING_MODEL' :
'DEVELOPMENT_MODEL';
return MY_CUSTOM_MODELS[modelKey];
}
}

Old Pattern (Still Supported):
// Legacy approach - still works but not recommended
class LegacyUseCase extends BaseAIUseCase<string, MyRequest, MyResult> {
protected get modelConfig(): ValidatedLLMModelConfig {
return myCustomGetModelConfig(this.modelConfigKey);
}
}See the Custom Config Example for a complete working implementation.
Parameter Configuration
LLM-middleware provides fine-grained control over model parameters to optimize output for different use cases:
import { BaseAIUseCase, ModelParameterOverrides } from '@loonylabs/llm-middleware';
class MyUseCase extends BaseAIUseCase<string, MyRequest, MyResult> {
protected getParameterOverrides(): ModelParameterOverrides {
return {
temperatureOverride: 0.8, // Control creativity vs. determinism
repeatPenalty: 1.3, // Reduce word repetition
frequencyPenalty: 0.2, // Penalize frequent words
presencePenalty: 0.2, // Encourage topic diversity
topP: 0.92, // Nucleus sampling threshold
topK: 60, // Vocabulary selection limit
repeatLastN: 128 // Context window for repetition
};
}
}

Parameter Levels:
- Global defaults: Set in `ModelParameterManagerService`
- Use-case level: Override via the `getParameterOverrides()` method
- Request level: Pass parameters directly in requests (see the sketch after this list)
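A minimal sketch of the request level, reusing the llmService.call options shown in the multi-provider section above; it assumes request-level options take precedence over use-case and global settings, as is typical:

import { llmService, LLMProvider } from '@loonylabs/llm-middleware';

// Request-level parameters for this call only (assumed to override
// use-case getParameterOverrides() values and global defaults)
const summary = await llmService.call(
  "Summarize the release notes in one sentence",
  {
    provider: LLMProvider.OLLAMA,
    model: "llama2",
    temperature: 0.3, // per-request creativity setting
    maxTokens: 60     // provider-agnostic token limit (v2.7.0+)
  }
);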
Available Presets:
import { ModelParameterManagerService } from '@loonylabs/llm-middleware';
// Use curated presets for common use cases
const creativeParams = ModelParameterManagerService.getDefaultParametersForType('creative_writing');
const factualParams = ModelParameterManagerService.getDefaultParametersForType('factual');
const poeticParams = ModelParameterManagerService.getDefaultParametersForType('poetic');
const dialogueParams = ModelParameterManagerService.getDefaultParametersForType('dialogue');
const technicalParams = ModelParameterManagerService.getDefaultParametersForType('technical');
const marketingParams = ModelParameterManagerService.getDefaultParametersForType('marketing');

Presets Include:
- Creative Writing: Novels, stories, narrative fiction
- Factual: Reports, documentation, journalism
- Poetic: Poetry, lyrics, artistic expression
- Dialogue: Character dialogue, conversational content
- Technical: Code documentation, API references
- Marketing: Advertisements, promotional content
For detailed documentation about all parameters, value ranges, and preset configurations, see: Provider Parameters Guide (Ollama-specific)
Configurable Response Processing
Starting in v2.8.0, you can customize how responses are processed with ResponseProcessingOptions:
interface ResponseProcessingOptions {
extractThinkTags?: boolean; // default: true
extractMarkdown?: boolean; // default: true
validateJson?: boolean; // default: true
cleanJson?: boolean; // default: true
recipeMode?: 'conservative' | 'aggressive' | 'adaptive';
}

Override getResponseProcessingOptions() to customize processing:
// Plain text response (compression, summarization)
class CompressEntityUseCase extends BaseAIUseCase {
protected getResponseProcessingOptions(): ResponseProcessingOptions {
return {
extractThinkTags: true, // YES: Extract <think> tags
extractMarkdown: true, // YES: Extract markdown blocks
validateJson: false, // NO: Skip JSON validation
cleanJson: false // NO: Skip JSON cleaning
};
}
}
// Keep think tags in content
class DebugUseCase extends BaseAIUseCase {
protected getResponseProcessingOptions(): ResponseProcessingOptions {
return {
extractThinkTags: false // Keep <think> tags visible
};
}
}
// Conservative JSON cleaning
class StrictJsonUseCase extends BaseAIUseCase {
protected getResponseProcessingOptions(): ResponseProcessingOptions {
return {
recipeMode: 'conservative' // Minimal JSON fixes
};
}
}

You can also use ResponseProcessorService directly:
import { ResponseProcessorService, ResponseProcessingOptions } from '@loonylabs/llm-middleware';
// Plain text (no JSON processing)
const result = await ResponseProcessorService.processResponseAsync(response, {
validateJson: false,
cleanJson: false
});
// Extract markdown but skip JSON
const result2 = await ResponseProcessorService.processResponseAsync(response, {
extractMarkdown: true,
validateJson: false
});

Use cases:

- Plain text responses: Compression, summarization, text generation
- Pre-validated JSON: Skip redundant validation
- Debug/analysis: Keep think tags in content
- Performance: Skip unnecessary processing steps
- Custom workflows: Mix and match extraction features
All options are optional with sensible defaults. Existing code works without changes:
// Still works exactly as before
const result = await ResponseProcessorService.processResponseAsync(response);

We welcome contributions! Please see our Contributing Guidelines for details.
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
Acknowledgments:

- Ollama for the amazing local LLM platform
- The open-source community for inspiration and contributions