LLM Program (TypeScript)

llmprogram is a TypeScript package that provides a structured and powerful way to create and run programs that use Large Language Models (LLMs). It uses a YAML-based configuration to define the behavior of your LLM programs, making them easy to create, manage, and share.

This is the TypeScript equivalent of the Python llmprogram library.

Features

  • YAML-based Configuration: Define your LLM programs using simple and intuitive YAML files.
  • Input/Output Validation: Use JSON schemas to validate the inputs and outputs of your programs, ensuring data integrity.
  • Handlebars Templating: Use the power of Handlebars templates to create dynamic prompts for your LLMs.
  • Caching: Built-in support for Redis caching to save time and reduce costs.
  • Execution Logging: Automatically log program executions to a SQLite database for analysis and debugging.
  • Streaming: Support for streaming responses from the LLM.
  • Batch Processing: Process multiple inputs in parallel for improved performance.
  • CLI for Dataset Generation: A command-line interface to generate instruction datasets for LLM fine-tuning from your logged data.
  • Web Service: Expose your programs as REST API endpoints with automatic OpenAPI documentation.
  • Analytics: Comprehensive analytics tracking with DuckDB for token usage, LLM calls, program usage, and timing metrics.
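To make the templating feature concrete, the following is a simplified, hand-rolled stand-in for Handlebars-style `{{variable}}` substitution. The package itself uses the real Handlebars library (which also supports helpers, conditionals, and iteration); this sketch only illustrates the basic placeholder-filling step:

```typescript
// Simplified stand-in for Handlebars-style rendering: replaces {{name}}
// placeholders with values from an inputs object. Unknown placeholders
// are left untouched. NOT the package's actual renderer.
function renderTemplate(template: string, inputs: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in inputs ? inputs[key] : match
  );
}

const prompt = renderTemplate(
  "Analyze the following text:\n{{text}}",
  { text: "I love this new product!" }
);
// prompt === "Analyze the following text:\nI love this new product!"
```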

Installation

npm install llmprogram

Usage

Program YAML File

Create a file named sentiment_analysis.yaml:

name: sentiment_analysis
description: Analyzes the sentiment of a given text.
version: 1.0.0

model:
  provider: openai
  name: gpt-4.1-mini
  temperature: 0.5
  max_tokens: 100
  response_format: json_object

system_prompt: |
  You are a sentiment analysis expert. Analyze the sentiment of the given text and return a JSON response with the following format:
  - sentiment (string): "positive", "negative", or "neutral"
  - score (number): A score from -1 (most negative) to 1 (most positive)

input_schema:
  type: object
  required:
    - text
  properties:
    text:
      type: string
      description: The text to analyze.

output_schema:
  type: object
  required:
    - sentiment
    - score
  properties:
    sentiment:
      type: string
      enum: ["positive", "negative", "neutral"]
    score:
      type: number
      minimum: -1
      maximum: 1

template: |
  Analyze the following text:
  {{text}}
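To show what the output_schema above enforces, here is a hand-rolled TypeScript check that mirrors it. The package performs this with JSON-schema validation against the YAML file; this is only an illustrative equivalent of the sentiment_analysis schema:

```typescript
// Mirrors the output_schema above: sentiment must be one of three enum
// values and score must be a number in [-1, 1]. Illustrative only --
// the library validates against the JSON schema itself.
interface SentimentResult {
  sentiment: "positive" | "negative" | "neutral";
  score: number;
}

function isValidSentimentResult(value: unknown): value is SentimentResult {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.sentiment === "string" &&
    ["positive", "negative", "neutral"].includes(v.sentiment) &&
    typeof v.score === "number" &&
    v.score >= -1 &&
    v.score <= 1
  );
}
```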

Using the Library

import { LLMProgram } from 'llmprogram';

async function main() {
    // Load the program definition from YAML and run it on a single input.
    const program = new LLMProgram('sentiment_analysis.yaml');
    const result = await program.call({ text: 'I love this new product! It is amazing.' });
    console.log(result); // e.g. { sentiment: 'positive', score: 0.9 } per the output_schema
}

main().catch(console.error);

Using the CLI

# Set your OpenAI API key
export OPENAI_API_KEY='your-api-key'

# Run with inputs from a JSON file
llmprogram run sentiment_analysis.yaml --inputs sentiment_inputs.json

# Run with inputs from command line
llmprogram run sentiment_analysis.yaml --input-json '{"text": "I love this product!"}'
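For reference, a matching sentiment_inputs.json would hold an object satisfying the program's input_schema. The exact file layout is an assumption (this README does not show one); a plain inputs object would look like:

```json
{
  "text": "I love this new product! It is amazing."
}
```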

CLI Commands

run

Run an LLM program with inputs from command line or files.

Usage:

# Run with inputs from a JSON file
llmprogram run program.yaml --inputs inputs.json

# Run with inputs from command line
llmprogram run program.yaml --input-json '{"text": "I love this product!"}'

# Save output to a file
llmprogram run program.yaml --inputs inputs.json --output result.json

generate-dataset

Generate an instruction dataset for LLM fine-tuning from a SQLite log file.

Usage:

llmprogram generate-dataset <database_path> <output_path>

analytics

Show analytics data collected from LLM program executions.

Usage:

# Show all analytics data
llmprogram analytics

# Show analytics for a specific program
llmprogram analytics --program sentiment_analysis

# Show analytics for a specific model
llmprogram analytics --model gpt-4

Web Service

The package includes a built-in web service that exposes your LLM programs as REST API endpoints.

Running the Web Service

# Run the web service with default settings
llmprogram-web

# Run the web service with custom directory
llmprogram-web --directory /path/to/your/programs

# Run the web service on a different host/port
llmprogram-web --host 0.0.0.0 --port 8080

API Endpoints

  • GET / - Root endpoint with API information
  • GET /programs - List all available programs
  • GET /programs/{program_name} - Get detailed information about a specific program
  • POST /programs/{program_name}/run - Run a specific program
  • GET /analytics/llm-calls - Get LLM call statistics
  • GET /analytics/program-usage - Get program usage statistics
  • GET /analytics/token-usage - Get token usage statistics
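A minimal sketch of calling the run endpoint from TypeScript. The port and the request-body shape (the program's inputs sent directly as JSON) are assumptions, not documented here; the service's generated OpenAPI documentation is the authoritative reference:

```typescript
// Build a request for POST /programs/{program_name}/run.
// ASSUMPTIONS: the default port is 8000 (see --host/--port above) and
// the endpoint accepts the program inputs as the raw JSON body.
const BASE_URL = "http://localhost:8000";

function buildRunRequest(programName: string, inputs: Record<string, unknown>) {
  return {
    url: `${BASE_URL}/programs/${encodeURIComponent(programName)}/run`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(inputs),
    },
  };
}

async function runProgram(programName: string, inputs: Record<string, unknown>) {
  const { url, init } = buildRunRequest(programName, inputs);
  const response = await fetch(url, init);
  return response.json();
}
```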
