# A Typed Runtime for LLM Orchestration
build: passing · tests: passing · license: MIT · node: >=18
> "We are not writing text; we are defining the topology of a thought process."
L-Script treats LLMs as typed, non-deterministic processors that require formal interfaces. Instead of ad-hoc string prompting, you define typed functions with Zod schemas that compile to structured API calls with automatic validation, retry logic, and provider abstraction.
Three pillars:
| Pillar | What it does |
|---|---|
| Schema Enforcement | Every LLM call validates output against a Zod schema. If the model drifts, the runtime catches it and retries. |
| Context Management | ContextStack manages conversation history with automatic FIFO or summarization pruning at token limits. |
| Model Agnosticism | Write logic once; swap providers (OpenAI, Anthropic, Gemini, Ollama) without changing function definitions. |
```ts
import { z } from "zod";
import { LScriptRuntime, OpenAIProvider } from "@sschepis/lmscript";
import type { LScriptFunction } from "@sschepis/lmscript";

// 1. Define your output schema
const AnalysisSchema = z.object({
  sentiment: z.enum(["positive", "negative", "neutral"]),
  summary: z.string(),
  action_items: z.array(z.string()),
});

// 2. Define the LLM function
const AnalyzeFeedback: LScriptFunction<string, typeof AnalysisSchema> = {
  name: "AnalyzeFeedback",
  model: "gpt-4o",
  system: "You are a Senior Product Manager.",
  prompt: (text) => `Review this customer feedback:\n${text}`,
  schema: AnalysisSchema,
  temperature: 0.3,
};

// 3. Execute with full type safety
const runtime = new LScriptRuntime({
  provider: new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY! }),
});

const feedbackText = "The new dashboard is great, but CSV export is broken."; // sample input
const result = await runtime.execute(AnalyzeFeedback, feedbackText);
// result.data is fully typed: { sentiment, summary, action_items }
```

Install:

```bash
npm install @sschepis/lmscript
```

Every LLM call is an `LScriptFunction` — a typed object with a name, model, system prompt, prompt template, and a Zod schema for output validation.
```ts
const MyFunction: LScriptFunction<string, typeof MySchema> = {
  name: "MyFunction",
  model: "gpt-4o",
  system: "You are an expert analyst.",
  prompt: (input) => `Analyze: ${input}`,
  schema: MySchema,
  temperature: 0.3,
  maxRetries: 3,
};
```

Chain multiple LLM functions sequentially with `Pipeline.from(fn1).pipe(fn2)`. The output of each step becomes the input to the next.
```ts
import { Pipeline } from "@sschepis/lmscript";

const pipeline = Pipeline.from(ExtractFacts).pipe(Summarize).pipe(GenerateReport);
const result = await pipeline.run(runtime, rawData);
// result.finalData — output of the last step
// result.steps — results from each step
// result.totalUsage — aggregated token usage
```

Add optional `examples` to any function for in-context learning:
```ts
const ClassifyEmail: LScriptFunction<string, typeof ClassifySchema> = {
  name: "ClassifyEmail",
  model: "gpt-4o",
  system: "Classify emails by intent.",
  prompt: (email) => `Classify this email:\n${email}`,
  schema: ClassifySchema,
  examples: [
    { input: "I want a refund", output: { intent: "refund", priority: "high" } },
    { input: "Thanks for the update!", output: { intent: "acknowledgment", priority: "low" } },
  ],
};
```

`ContextStack` manages conversation history with configurable token limits and pruning strategies:
```ts
import { ContextStack } from "@sschepis/lmscript";

const ctx = new ContextStack({ maxTokens: 4096, pruneStrategy: "fifo" });
ctx.push({ role: "user", content: "Hello" });
ctx.push({ role: "assistant", content: "Hi there!" });

ctx.getMessages(); // ChatMessage[]
ctx.getTokenCount(); // estimated token count
```

The "summarize" strategy accepts a custom summarizer function for intelligent pruning.
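The summarizer's exact signature isn't shown here, but as a hypothetical illustration (`naiveSummarizer` and the local `ChatMessage` type are stand-ins, not the library's declared types), a summarizer might collapse older messages into one synthetic system message:

```typescript
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Hypothetical summarizer: keep the newest `keep` messages verbatim and
// collapse everything older into a single synthetic system message.
function naiveSummarizer(messages: ChatMessage[], keep: number): ChatMessage[] {
  if (messages.length <= keep) return messages;
  const older = messages.slice(0, messages.length - keep);
  const summary = older.map((m) => `${m.role}: ${m.content}`).join(" | ");
  return [
    { role: "system", content: `Summary of ${older.length} earlier messages: ${summary}` },
    ...messages.slice(-keep),
  ];
}
```

A real summarizer would call a cheap model to produce the summary instead of concatenating text.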
executeStream() returns a StreamResult with partial tokens as they arrive:
```ts
const { stream, result } = await runtime.executeStream(MyFunction, input);

for await (const token of stream) {
  process.stdout.write(token); // partial tokens
}

const final = await result; // validated ExecutionResult<T>
```

Attach tools to any function. The runtime automatically executes tool calls and re-prompts the LLM with results:
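Conceptually, that execute-and-re-prompt cycle is a loop that keeps asking the model until it stops requesting tools. A library-agnostic sketch (the `ModelTurn` shape and `runToolLoop` are illustrative names, not L-Script APIs):

```typescript
// Sketch of the loop: while the model asks for a tool, execute it and
// hand the result back; stop when the model produces a final answer.
type ToolCall = { tool: string; args: unknown };
type ModelTurn = { kind: "final"; text: string } | { kind: "tool"; call: ToolCall };

async function runToolLoop(
  step: (toolResult?: unknown) => Promise<ModelTurn>,
  tools: Record<string, (args: unknown) => Promise<unknown>>,
  maxTurns = 5,
): Promise<string> {
  let toolResult: unknown;
  for (let turn = 0; turn < maxTurns; turn++) {
    const reply = await step(toolResult);
    if (reply.kind === "final") return reply.text;
    toolResult = await tools[reply.call.tool](reply.call.args); // execute, then re-prompt
  }
  throw new Error("Tool loop exceeded maxTurns");
}
```

In practice you only declare the tools; the runtime owns this loop.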
```ts
import { z } from "zod";
import type { ToolDefinition } from "@sschepis/lmscript";

const LookupTool: ToolDefinition = {
  name: "lookup_user",
  description: "Look up a user by ID",
  parameters: z.object({ userId: z.string() }),
  execute: async ({ userId }) => db.users.findById(userId),
};

const WithTools: LScriptFunction<string, typeof Schema> = {
  name: "WithTools",
  model: "gpt-4o",
  system: "You are a helpful assistant.",
  prompt: (q) => q,
  schema: Schema,
  tools: [LookupTool],
};
```

`MiddlewareManager` provides lifecycle hooks for cross-cutting concerns:
```ts
import { MiddlewareManager } from "@sschepis/lmscript";

const middleware = new MiddlewareManager();
middleware.use({
  onBeforeExecute: (ctx) => console.log(`Starting ${ctx.fn.name}`),
  onAfterValidation: (ctx, result) => console.log("Validated:", result),
  onRetry: (ctx, error) => console.warn(`Retrying: ${error.message}`),
  onError: (ctx, error) => console.error(`Failed: ${error.message}`),
  onComplete: (ctx, result) => console.log(`Done in ${result.attempts} attempts`),
});

const runtime = new LScriptRuntime({ provider, middleware });
```

`ExecutionCache` with `MemoryCacheBackend` memoizes LLM responses with TTL support:
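Under the hood, TTL memoization reduces to a keyed map with expiry deadlines. A self-contained sketch of the general pattern (an assumption about the technique, not the actual `MemoryCacheBackend` source):

```typescript
// Sketch of TTL memoization: cache keyed by e.g. function name + input,
// with stale entries evicted lazily once their deadline passes.
type CacheEntry = { value: unknown; expiresAt: number };

class TtlCache {
  private store = new Map<string, CacheEntry>();
  constructor(private ttlMs: number) {}

  get(key: string, now: number = Date.now()): unknown {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (now >= entry.expiresAt) {
      this.store.delete(key); // stale entry: evict and report a miss
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: unknown, now: number = Date.now()): void {
    this.store.set(key, { value, expiresAt: now + this.ttlMs });
  }
}
```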
```ts
import { ExecutionCache, MemoryCacheBackend } from "@sschepis/lmscript";

const cache = new ExecutionCache(new MemoryCacheBackend(), {
  defaultTtlMs: 60_000, // 1 minute TTL
});

const runtime = new LScriptRuntime({ provider, cache });
// Identical inputs will return cached results
```

`CostTracker` monitors token usage with per-function breakdowns and budget limits:
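Rates are quoted per 1,000 tokens, so a single call's cost works out to `inputTokens / 1000 * inputPer1k` plus the same for output. A sketch of that arithmetic (the formula is inferred from the shape of the rate table, not taken from the library):

```typescript
// Assumed pricing model: USD per 1,000 tokens, input and output priced separately.
type ModelRate = { inputPer1k: number; outputPer1k: number };

function callCost(rate: ModelRate, inputTokens: number, outputTokens: number): number {
  return (inputTokens / 1000) * rate.inputPer1k + (outputTokens / 1000) * rate.outputPer1k;
}
```

At the gpt-4o rates shown below, a call with 2,000 input and 1,000 output tokens costs about $0.025.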
```ts
import { CostTracker } from "@sschepis/lmscript";

const costTracker = new CostTracker({
  "gpt-4o": { inputPer1k: 0.005, outputPer1k: 0.015 },
});

const runtime = new LScriptRuntime({
  provider,
  costTracker,
  budget: { maxTotalCost: 1.00, maxTotalTokens: 100_000 },
});

// After executions:
costTracker.getTotalCost(); // total USD spent
costTracker.getUsageByFunction(); // per-function breakdown
```

`Logger` with transports, spans, and log levels for execution tracing:
```ts
import { Logger, ConsoleTransport, LogLevel } from "@sschepis/lmscript";

const logger = new Logger({
  level: LogLevel.DEBUG,
  transports: [new ConsoleTransport()],
});

const span = logger.startSpan("my-operation");
span.info("Processing started");
span.end();

const runtime = new LScriptRuntime({ provider, logger });
```

The `lsc` command-line tool for working with L-Script files:
```bash
lsc compile <file.ls>    # Compile .ls file and print JSON manifest
lsc list <file.ls>       # List all functions defined in a file
lsc validate <file.ls>   # Validate syntax without compiling
lsc parse <file.ls>      # Parse and print AST
```

`OpenAIProvider` works with OpenAI and any OpenAI-compatible API:
```ts
import { OpenAIProvider } from "@sschepis/lmscript";

const provider = new OpenAIProvider({
  apiKey: process.env.OPENAI_API_KEY!,
  baseUrl: "https://api.openai.com/v1", // or any compatible endpoint
});
```

`AnthropicProvider` for Claude models:
```ts
import { AnthropicProvider } from "@sschepis/lmscript";

const provider = new AnthropicProvider({
  apiKey: process.env.ANTHROPIC_API_KEY!,
});
```

`GeminiProvider` for Gemini models:
```ts
import { GeminiProvider } from "@sschepis/lmscript";

const provider = new GeminiProvider({
  apiKey: process.env.GEMINI_API_KEY!,
});
```

`OllamaProvider` for local models via Ollama:
```ts
import { OllamaProvider } from "@sschepis/lmscript";

const provider = new OllamaProvider({
  apiKey: "unused", // Ollama doesn't require API keys
  baseUrl: "http://localhost:11434",
});
```

`ModelRouter` routes requests to different providers based on pattern-matched rules:
```ts
import { ModelRouter } from "@sschepis/lmscript";

const router = new ModelRouter([
  { pattern: /^gpt-/, provider: openaiProvider },
  { pattern: /^claude-/, provider: anthropicProvider },
  { pattern: /^gemini-/, provider: geminiProvider },
  { pattern: /.*/, provider: ollamaProvider }, // fallback
]);

const runtime = new LScriptRuntime({ provider: router });
```

`FallbackProvider` automatically fails over to the next provider on error:
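At its core, provider failover is just try-each-in-order. A library-agnostic sketch (not the actual `FallbackProvider` implementation):

```typescript
// Sketch of failover: try each provider in order; the first success wins,
// and the last error is rethrown only if every provider fails.
async function withFallback<T>(attempts: Array<() => Promise<T>>): Promise<T> {
  let lastError: unknown = new Error("no providers configured");
  for (const attempt of attempts) {
    try {
      return await attempt();
    } catch (err) {
      lastError = err; // remember the failure and move on to the next provider
    }
  }
  throw lastError;
}
```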
```ts
import { FallbackProvider } from "@sschepis/lmscript";

const provider = new FallbackProvider([
  openaiProvider,
  anthropicProvider,
  ollamaProvider,
]);
// If OpenAI fails, tries Anthropic, then Ollama
```

`MockProvider` with request recording and pattern-matched responses for unit testing:
```ts
import { MockProvider, createMockProvider } from "@sschepis/lmscript";

const mock = createMockProvider([
  { input: /feedback/, output: { sentiment: "positive", summary: "Great!" } },
  { output: { fallback: true } }, // default response
]);

const runtime = new LScriptRuntime({ provider: mock });
// mock.getRecordedRequests() — inspect all requests made
```

`diffSchemaResult()` provides field-by-field validation diffs for testing LLM output:
```ts
import { diffSchemaResult, formatSchemaDiff } from "@sschepis/lmscript";

const diff = diffSchemaResult(schema, actualOutput);
if (diff.length > 0) {
  console.log(formatSchemaDiff(diff));
  // Shows exactly which fields failed validation and why
}
```

`captureSnapshot()` / `compareSnapshots()` for prompt regression testing:
```ts
import { captureSnapshot, compareSnapshots, formatSnapshotDiff } from "@sschepis/lmscript";

const baseline = captureSnapshot(myFunction, testInput);
// ... make changes ...
const current = captureSnapshot(myFunction, testInput);

const diff = compareSnapshots(baseline, current);
if (diff.changed) {
  console.log(formatSnapshotDiff(diff));
}
```

`ChaosProvider` and `generateFuzzInputs()` for resilience testing:
```ts
import { ChaosProvider, generateFuzzInputs } from "@sschepis/lmscript";

// Wrap a provider to inject random failures
const chaos = new ChaosProvider(realProvider, {
  errorRate: 0.3, // 30% chance of error
  latencyMs: [100, 2000], // random latency range
  malformedRate: 0.1, // 10% chance of malformed JSON
});

// Generate fuzz inputs for a schema
const fuzzInputs = generateFuzzInputs(MySchema, 50);
```

`executeAll()` and `executeBatch()` with concurrency control:
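A concurrency limit means at most N tasks are in flight at any moment. A self-contained sketch of the worker-lane pattern such a limit typically uses (illustrative, not the runtime's implementation):

```typescript
// Sketch of bounded concurrency: `limit` lanes pull items off a shared
// cursor, so at most `limit` workers run at once while order is preserved.
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  worker: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  async function lane(): Promise<void> {
    while (next < items.length) {
      const i = next++; // safe: single-threaded event loop, no await in between
      results[i] = await worker(items[i]);
    }
  }
  await Promise.all(Array.from({ length: Math.min(limit, items.length) }, lane));
  return results;
}
```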
```ts
// Execute multiple functions in parallel
const results = await runtime.executeAll([
  { fn: AnalyzeSentiment, input: text },
  { fn: ExtractEntities, input: text },
  { fn: ClassifyTopic, input: text },
]);
// results.tasks — individual results
// results.successCount, results.failureCount

// Batch execution with concurrency limit
const batchResults = await runtime.executeBatch(
  items.map((item) => ({ fn: ProcessItem, input: item })),
  { concurrency: 5 }
);
```

`Session` manages multi-turn conversations with automatic history tracking:
```ts
import { Session, ContextStack } from "@sschepis/lmscript";

const session = new Session(
  runtime,
  ChatFunction,
  new ContextStack({ maxTokens: 8192, pruneStrategy: "fifo" })
);

const r1 = await session.send("What is TypeScript?");
const r2 = await session.send("How does it compare to JavaScript?");
// r2 has full conversation context from r1

session.getHistory(); // full ChatMessage[] history
session.getTokenCount(); // current token usage
session.clearHistory(); // reset the session
```

`executeWithTransform()` with composable `OutputTransformer` functions:
```ts
import {
  withTransform,
  dateStringTransformer,
  trimStringsTransformer,
  composeTransformers,
} from "@sschepis/lmscript";

// Apply a single transformer
const result = await runtime.executeWithTransform(
  withTransform(MyFunction, trimStringsTransformer)
);

// Compose multiple transformers (applied left-to-right)
const composed = composeTransformers(
  trimStringsTransformer,
  dateStringTransformer,
);
```

Parse `.ls` files into typed `LScriptFunction` objects with the built-in DSL compiler:
```
// security-review.ls

type Critique = {
  score: number(min=1, max=10),
  vulnerabilities: string[],
  suggested_fix: string
}

llm SecurityReviewer(code: string) -> Critique {
  model: "gpt-4o"
  temperature: 0.2
  system: "You are a senior security researcher. Be pedantic and skeptical."
  prompt:
    """
    Review the following function for security flaws:
    {{code}}
    """
}
```

Use programmatically or via CLI:
```ts
import { compileFile } from "@sschepis/lmscript";

const module = await compileFile("./security-review.ls");
// module.functions — Map of compiled LScriptFunction objects
// module.types — Map of compiled Zod schemas
```

```bash
lsc compile security-review.ls   # Compile and print JSON manifest
lsc parse security-review.ls     # Print AST
```

| Class / Function | Description |
|---|---|
| `LScriptRuntime` | Core runtime — `execute()`, `executeStream()`, `executeAll()`, `executeBatch()`, `executeWithTransform()`, `executeWithHistory()` |
| `LScriptFunction<I, O>` | Typed LLM function definition with schema, prompt template, and options |
| `Pipeline` | Sequential multi-step pipeline — `Pipeline.from(fn).pipe(fn2)` |
| `Session` | Multi-turn conversational session with context tracking |
| `ContextStack` | Managed conversation history with token limits and pruning |
| `MiddlewareManager` | Lifecycle hooks — `onBeforeExecute`, `onAfterValidation`, `onRetry`, `onError`, `onComplete` |
| `ExecutionCache` | Response caching with pluggable backends and TTL |
| `MemoryCacheBackend` | In-memory cache backend |
| `CostTracker` | Token usage tracking with budget enforcement |
| `Logger` | Structured logging with transports and spans |
| `OpenAIProvider` | OpenAI / OpenAI-compatible provider |
| `AnthropicProvider` | Anthropic Claude provider |
| `GeminiProvider` | Google Gemini provider |
| `OllamaProvider` | Ollama local model provider |
| `ModelRouter` | Pattern-based model routing |
| `FallbackProvider` | Automatic provider failover chain |
| `MockProvider` | Mock provider for testing with request recording |
| `diffSchemaResult()` | Field-by-field schema validation diff |
| `captureSnapshot()` | Prompt snapshot capture for regression testing |
| `compareSnapshots()` | Snapshot comparison with detailed diffs |
| `ChaosProvider` | Chaos testing provider — injects errors, latency, malformed responses |
| `generateFuzzInputs()` | Fuzz input generator for schema-aware testing |
| `compile()` / `compileFile()` | L-Script DSL compiler |
| `withTransform()` | Output transformer wrapper |
| `composeTransformers()` | Compose multiple output transformers |
```
┌──────────────────────────────────────────────────────────────┐
│                        L-Script DSL                          │
│              .ls files → Lexer → Parser → AST                │
└──────────────────────┬───────────────────────────────────────┘
                       │ compile()
                       ▼
┌──────────────────────────────────────────────────────────────┐
│                   LScriptFunction<I, O>                      │
│        name · model · system · prompt · schema               │
│        examples · tools · temperature · maxRetries           │
└──────────────────────┬───────────────────────────────────────┘
                       │
          ┌────────────┼────────────┐
          ▼            ▼            ▼
    ┌──────────┐ ┌──────────┐ ┌──────────┐
    │ Pipeline │ │ Session  │ │ Parallel │
    │  .pipe() │ │  .send() │ │  execAll │
    └────┬─────┘ └────┬─────┘ └────┬─────┘
         └────────────┬────────────┘
                      ▼
┌──────────────────────────────────────────────────────────────┐
│                      LScriptRuntime                          │
│   middleware → cache → cost tracking → logger → validation   │
└──────────────────────┬───────────────────────────────────────┘
                       │
              ┌────────┼────────┐
              ▼        ▼        ▼
        ┌──────────┐ ┌─────┐ ┌──────────┐
        │  Router  │ │ FB  │ │  Direct  │
        │(pattern) │ │Chain│ │ Provider │
        └────┬─────┘ └──┬──┘ └────┬─────┘
             └──────────┼─────────┘
                        ▼
┌──────────────────────────────────────────────────────────────┐
│                      LLM Providers                           │
│        OpenAI · Anthropic · Gemini · Ollama · Mock           │
└──────────────────────────────────────────────────────────────┘
```
```bash
npm install      # Install dependencies
npm run build    # Compile TypeScript
npm test         # Run tests
npm run dev      # Watch mode
```

```bash
npm run cli:compile -- <file.ls>    # Compile .ls file
npm run cli:list -- <file.ls>       # List functions
npm run cli:validate -- <file.ls>   # Validate syntax
npm run cli:parse -- <file.ls>      # Print AST
```

```bash
export OPENAI_API_KEY=sk-...
npm run example:security
```

MIT