`llmprogram` is a TypeScript package that provides a structured way to create and run programs that use Large Language Models (LLMs). Programs are defined in YAML configuration files, which makes them easy to create, manage, and share. It is the TypeScript equivalent of the Python llmprogram library.

Key features:
- YAML-based Configuration: Define your LLM programs using simple and intuitive YAML files.
- Input/Output Validation: Use JSON schemas to validate the inputs and outputs of your programs, ensuring data integrity.
- Handlebars Templating: Use the power of Handlebars templates to create dynamic prompts for your LLMs.
- Caching: Built-in support for Redis caching to save time and reduce costs.
- Execution Logging: Automatically log program executions to a SQLite database for analysis and debugging.
- Streaming: Support for streaming responses from the LLM.
- Batch Processing: Process multiple inputs in parallel for improved performance.
- CLI for Dataset Generation: A command-line interface to generate instruction datasets for LLM fine-tuning from your logged data.
- Web Service: Expose your programs as REST API endpoints with automatic OpenAPI documentation.
- Analytics: Comprehensive analytics tracking with DuckDB for token usage, LLM calls, program usage, and timing metrics.
Install with npm:

```bash
npm install llmprogram
```
Create a file named `sentiment_analysis.yaml`:

```yaml
name: sentiment_analysis
description: Analyzes the sentiment of a given text.
version: 1.0.0

model:
  provider: openai
  name: gpt-4.1-mini
  temperature: 0.5
  max_tokens: 100
  response_format: json_object

system_prompt: |
  You are a sentiment analysis expert. Analyze the sentiment of the given text and return a JSON response with the following format:
  - sentiment (string): "positive", "negative", or "neutral"
  - score (number): A score from -1 (most negative) to 1 (most positive)

input_schema:
  type: object
  required:
    - text
  properties:
    text:
      type: string
      description: The text to analyze.

output_schema:
  type: object
  required:
    - sentiment
    - score
  properties:
    sentiment:
      type: string
      enum: ["positive", "negative", "neutral"]
    score:
      type: number
      minimum: -1
      maximum: 1

template: |
  Analyze the following text:
  {{text}}
```
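The `template` field is a Handlebars template, so `{{text}}` is replaced with the program's `text` input when the prompt is built. A standalone sketch of that substitution using the `handlebars` package directly (illustrative only; this is not llmprogram's internal code):

```typescript
import Handlebars from 'handlebars';

// Compile the same template string used in the YAML above.
const template = Handlebars.compile('Analyze the following text:\n{{text}}');

// Rendering with an input object produces the final prompt text.
console.log(template({ text: 'I love this new product! It is amazing.' }));
// Analyze the following text:
// I love this new product! It is amazing.
```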
Then run the program from TypeScript:

```typescript
import { LLMProgram } from 'llmprogram';

async function main() {
  // Load the program definition from the YAML file.
  const program = new LLMProgram('sentiment_analysis.yaml');

  // Inputs are validated against input_schema before the LLM is called.
  const result = await program.call({ text: 'I love this new product! It is amazing.' });
  console.log(result); // e.g. { sentiment: 'positive', score: 0.9 }
}

main().catch(console.error);
```
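As the feature list notes, inputs and outputs are validated against the JSON schemas in the YAML file. Purely as an illustration of what JSON Schema validation does, here is a sketch using the standalone `ajv` package; it assumes nothing about llmprogram's internals:

```typescript
import Ajv from 'ajv';

const ajv = new Ajv();

// The same output_schema as in sentiment_analysis.yaml.
const validate = ajv.compile({
  type: 'object',
  required: ['sentiment', 'score'],
  properties: {
    sentiment: { type: 'string', enum: ['positive', 'negative', 'neutral'] },
    score: { type: 'number', minimum: -1, maximum: 1 },
  },
});

console.log(validate({ sentiment: 'positive', score: 0.9 })); // true
console.log(validate({ sentiment: 'meh', score: 2 }));        // false
console.log(validate.errors);                                  // details on the failure
```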
Or run it from the command line:

```bash
# Set your OpenAI API key
export OPENAI_API_KEY='your-api-key'

# Run with inputs from a JSON file
llmprogram run sentiment_analysis.yaml --inputs sentiment_inputs.json

# Run with inputs from command line
llmprogram run sentiment_analysis.yaml --input-json '{"text": "I love this product!"}'
```
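For the file-based form, `sentiment_inputs.json` is presumably a JSON object whose keys match the program's `input_schema` (the exact file format is an assumption based on the `--input-json` variant above), e.g.:

```json
{
  "text": "I love this new product! It is amazing."
}
```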
`llmprogram run`: Run an LLM program with inputs supplied on the command line or from files.

Usage:

```bash
# Run with inputs from a JSON file
llmprogram run program.yaml --inputs inputs.json

# Run with inputs from command line
llmprogram run program.yaml --input-json '{"text": "I love this product!"}'

# Save output to a file
llmprogram run program.yaml --inputs inputs.json --output result.json
```
`llmprogram generate-dataset`: Generate an instruction dataset for LLM fine-tuning from a SQLite log file.

Usage:

```bash
llmprogram generate-dataset <database_path> <output_path>
```
`llmprogram analytics`: Show analytics data collected from LLM program executions.

Usage:

```bash
# Show all analytics data
llmprogram analytics

# Show analytics for a specific program
llmprogram analytics --program sentiment_analysis

# Show analytics for a specific model
llmprogram analytics --model gpt-4
```
The package includes a built-in web service that exposes your LLM programs as REST API endpoints.
```bash
# Run the web service with default settings
llmprogram-web

# Run the web service with custom directory
llmprogram-web --directory /path/to/your/programs

# Run the web service on a different host/port
llmprogram-web --host 0.0.0.0 --port 8080
```
- `GET /` - Root endpoint with API information
- `GET /programs` - List all available programs
- `GET /programs/{program_name}` - Get detailed information about a specific program
- `POST /programs/{program_name}/run` - Run a specific program
- `GET /analytics/llm-calls` - Get LLM call statistics
- `GET /analytics/program-usage` - Get program usage statistics
- `GET /analytics/token-usage` - Get token usage statistics
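For example, the run endpoint could be called from TypeScript like this (a sketch; the request body shape is an assumption, since it is not documented here):

```typescript
// Illustrative sketch only (Node 18+ with built-in fetch). It assumes the
// service is running locally on port 8080, as started above, and that the
// run endpoint accepts the program inputs as a JSON body; consult the
// generated OpenAPI docs for the actual request/response schema.
async function runViaHttp() {
  const res = await fetch('http://localhost:8080/programs/sentiment_analysis/run', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text: 'I love this new product! It is amazing.' }),
  });
  console.log(await res.json());
}

runViaHttp().catch(console.error);
```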