TensorZero is an open-source stack for industrial-grade LLM applications:
- Gateway: access every LLM provider through a unified API, built for performance (<1ms p99 latency)
- Observability: store inferences and feedback in your database, available programmatically or in the UI
- Optimization: collect metrics and human feedback to optimize prompts, models, and inference strategies
- Evaluations: benchmark individual inferences or end-to-end workflows using heuristics, LLM judges, etc.
- Experimentation: ship with confidence with built-in A/B testing, routing, fallbacks, retries, etc.
Take what you need, adopt incrementally, and complement with other tools.
Website · Docs · Twitter · Slack · Discord
Quick Start (5 min) · Deployment Guide · API Reference · Configuration Reference
| Question | Answer |
| --- | --- |
| What is TensorZero? | TensorZero is an open-source stack for industrial-grade LLM applications. It unifies an LLM gateway, observability, optimization, evaluations, and experimentation. |
| How is TensorZero different from other LLM frameworks? | 1. TensorZero enables you to optimize complex LLM applications based on production metrics and human feedback. 2. TensorZero supports the needs of industrial-grade LLM applications: low latency, high throughput, type safety, self-hosted, GitOps, customizability, etc. 3. TensorZero unifies the entire LLMOps stack, creating compounding benefits. For example, LLM evaluations can be used for fine-tuning models alongside AI judges. |
| Can I use TensorZero with ___? | Yes. Every major programming language is supported. You can use TensorZero with our Python client, any OpenAI SDK or OpenAI-compatible client, or our HTTP API. |
| Is TensorZero production-ready? | Yes. Here's a case study: Automating Code Changelogs at a Large Bank with LLMs. |
| How much does TensorZero cost? | Nothing. TensorZero is 100% self-hosted and open-source. There are no paid features. |
| Who is building TensorZero? | Our technical team includes a former Rust compiler maintainer, machine learning researchers (Stanford, CMU, Oxford, Columbia) with thousands of citations, and the chief product officer of a decacorn startup. We're backed by the same investors as leading open-source projects (e.g. ClickHouse, CockroachDB) and AI labs (e.g. OpenAI, Anthropic). |
| How do I get started? | You can adopt TensorZero incrementally. Our Quick Start goes from a vanilla OpenAI wrapper to a production-ready LLM application with observability and fine-tuning in just five minutes. |
Integrate with TensorZero once and access every major LLM provider.
- Access every major LLM provider (API or self-hosted) through a single unified API
- Infer with streaming, tool use, structured generation (JSON mode), batch, multimodal (VLMs), file inputs, caching, etc.
- Define prompt templates and schemas to enforce a consistent, typed interface between your application and the LLMs
- Satisfy extreme throughput and latency needs, thanks to Rust: <1ms p99 latency overhead at 10k+ QPS
- Integrate using our Python client, any OpenAI SDK or OpenAI-compatible client, or our HTTP API (use any programming language)
- Ensure high availability with routing, retries, fallbacks, load balancing, granular timeouts, etc.
- Soon: embeddings; real-time voice
Model Providers: The TensorZero Gateway natively supports every major LLM provider (API or self-hosted). Need something else? Your provider is most likely supported because TensorZero integrates with any OpenAI-compatible API (e.g. Ollama).
Features: The TensorZero Gateway supports advanced features like streaming, tool use, structured generation (JSON mode), batch inference, multimodal inference (VLMs), caching, routing, retries, fallbacks, and granular timeouts.
The TensorZero Gateway is written in Rust 🦀 with performance in mind (<1ms p99 latency overhead @ 10k QPS). See Benchmarks.
You can run inference using the TensorZero client (recommended), the OpenAI client, or the HTTP API.
Usage: Python – TensorZero Client (Recommended)
You can access any provider using the TensorZero Python client.
- Install the TensorZero client: pip install tensorzero
- Optional: Set up the TensorZero configuration.
- Run inference:
from tensorzero import TensorZeroGateway  # or AsyncTensorZeroGateway

with TensorZeroGateway.build_embedded(clickhouse_url="...", config_file="...") as client:
    response = client.inference(
        model_name="openai::gpt-4o-mini",
        # Try other providers easily: "anthropic::claude-3-7-sonnet-20250219"
        input={
            "messages": [
                {
                    "role": "user",
                    "content": "Write a haiku about artificial intelligence.",
                }
            ]
        },
    )
See Quick Start for more information.
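The same client exposes the gateway's other inference features, such as streaming. Below is a minimal streaming sketch, assuming the same embedded-gateway setup as above; the exact fields on each chunk depend on the response type, so this sketch simply prints the raw chunks.

from tensorzero import TensorZeroGateway

with TensorZeroGateway.build_embedded(clickhouse_url="...", config_file="...") as client:
    # stream=True returns an iterator of incremental chunks instead of a single response
    stream = client.inference(
        model_name="openai::gpt-4o-mini",
        input={
            "messages": [
                {"role": "user", "content": "Write a haiku about artificial intelligence."}
            ]
        },
        stream=True,
    )

    for chunk in stream:
        print(chunk)  # each chunk carries an incremental slice of the response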
Usage: Python – OpenAI Client
You can access any provider using the OpenAI Python client with TensorZero.
- Install the TensorZero client: pip install tensorzero
- Optional: Set up the TensorZero configuration.
- Run inference:
from openai import OpenAI  # or AsyncOpenAI
from tensorzero import patch_openai_client

client = OpenAI()

patch_openai_client(
    client,
    clickhouse_url="http://chuser:chpassword@localhost:8123/tensorzero",
    config_file="config/tensorzero.toml",
    async_setup=False,
)

response = client.chat.completions.create(
    model="tensorzero::model_name::openai::gpt-4o-mini",
    # Try other providers easily: "tensorzero::model_name::anthropic::claude-3-7-sonnet-20250219"
    messages=[
        {
            "role": "user",
            "content": "Write a haiku about artificial intelligence.",
        }
    ],
)
See Quick Start for more information.
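The same tensorzero:: prefix can also point at a function defined in your TensorZero configuration, so the gateway applies that function's prompt templates, schemas, and variants instead of a raw model name. Here is a rough sketch, assuming a hypothetical function named draft_haiku (with no input schema) is defined in config/tensorzero.toml:

from openai import OpenAI
from tensorzero import patch_openai_client

client = OpenAI()

patch_openai_client(
    client,
    clickhouse_url="http://chuser:chpassword@localhost:8123/tensorzero",
    config_file="config/tensorzero.toml",
    async_setup=False,
)

response = client.chat.completions.create(
    # "draft_haiku" is an illustrative function name; define it in your configuration first
    model="tensorzero::function_name::draft_haiku",
    messages=[
        {"role": "user", "content": "Write a haiku about artificial intelligence."}
    ],
)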
Usage: JavaScript / TypeScript (Node) – OpenAI Client
You can access any provider using the OpenAI Node client with TensorZero.
- Deploy tensorzero/gateway using Docker (detailed instructions →).
- Set up the TensorZero configuration.
- Run inference:
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "http://localhost:3000/openai/v1",
});

const response = await client.chat.completions.create({
  model: "tensorzero::model_name::openai::gpt-4o-mini",
  // Try other providers easily: "tensorzero::model_name::anthropic::claude-3-7-sonnet-20250219"
  messages: [
    {
      role: "user",
      content: "Write a haiku about artificial intelligence.",
    },
  ],
});
See Quick Start for more information.
Usage: Other Languages & Platforms – HTTP API
TensorZero supports virtually any programming language or platform via its HTTP API.
- Deploy tensorzero/gateway using Docker (detailed instructions →).
- Optional: Set up the TensorZero configuration.
- Run inference:
curl -X POST "http://localhost:3000/inference" \
  -H "Content-Type: application/json" \
  -d '{
    "model_name": "openai::gpt-4o-mini",
    "input": {
      "messages": [
        {
          "role": "user",
          "content": "Write a haiku about artificial intelligence."
        }
      ]
    }
  }'
See Quick Start for more information.
Zoom in to debug individual API calls, or zoom out to monitor metrics across models and prompts over time, all using the open-source TensorZero UI.
- Store inferences and feedback (metrics, human edits, etc.) in your own database (see the sketch after this list)
- Dive into individual inferences or high-level aggregate patterns using the TensorZero UI or programmatically
- Build datasets for optimization, evaluations, and other workflows
- Replay historical inferences with new prompts, models, inference strategies, etc.
- Export OpenTelemetry (OTLP) traces to your favorite general-purpose observability tool
- Soon: AI-assisted debugging and root cause analysis; AI-assisted data labeling
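For example, here is a minimal sketch of recording feedback with the Python client, assuming the embedded-gateway setup from the Quick Start and a hypothetical boolean metric named task_success declared in your configuration:

from tensorzero import TensorZeroGateway

with TensorZeroGateway.build_embedded(clickhouse_url="...", config_file="...") as client:
    response = client.inference(
        model_name="openai::gpt-4o-mini",
        input={
            "messages": [
                {"role": "user", "content": "Write a haiku about artificial intelligence."}
            ]
        },
    )

    # Attach feedback to that specific inference; "task_success" is an
    # illustrative metric name that would need to be defined in tensorzero.toml
    client.feedback(
        metric_name="task_success",
        inference_id=response.inference_id,
        value=True,
    )

Both the inference and the feedback land in your database, where they can power the optimization workflows described below.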
(UI screenshots: Observability » Inference, Observability » Function)
Send production metrics and human feedback to easily optimize your prompts, models, and inference strategies, using the UI or programmatically.
- Optimize your models with supervised fine-tuning, RLHF, and other techniques
- Optimize your prompts with automated prompt engineering algorithms like MIPROv2
- Optimize your inference strategy with dynamic in-context learning, chain of thought, best/mixture-of-N sampling, etc.
- Enable a feedback loop for your LLMs: a data & learning flywheel turning production data into smarter, faster, and cheaper models
- Soon: programmatic optimization; synthetic data generation
Optimize closed-source and open-source models using supervised fine-tuning (SFT) and preference fine-tuning (DPO).
(Screenshots: Supervised Fine-tuning in the UI; Preference Fine-tuning (DPO) in a Jupyter notebook)
Boost performance by dynamically updating your prompts with relevant examples, combining responses from multiple inferences, and more.
(Diagrams: Best-of-N Sampling, Mixture-of-N Sampling, Dynamic In-Context Learning (DICL), Chain-of-Thought (CoT))
More coming soon...
Optimize your prompts programmatically using research-driven optimization techniques.
(Diagram: MIPROv2)
DSPy Integration: TensorZero comes with several optimization recipes, but you can also easily create your own. This example shows how to optimize a TensorZero function using an arbitrary tool, in this case DSPy, a popular library for automated prompt engineering.
More coming soon...
Compare prompts, models, and inference strategies using TensorZero Evaluations β with support for heuristics and LLM judges.
- Evaluate individual inferences with static evaluations powered by heuristics or LLM judges (≈ unit tests for LLMs)
- Evaluate end-to-end workflows using dynamic evaluations, with complete flexibility (≈ integration tests for LLMs)
- Optimize LLM judges just like any other TensorZero function to align them to human preferences
- Soon: more built-in evaluators; headless evaluations
Ship with confidence thanks to built-in A/B testing, routing, fallbacks, retries, etc.
- Run built-in A/B tests across models, prompts, providers, hyperparameters, etc.
- Enforce principled experiments (RCTs) in complex workflows, including multi-turn and compound LLM systems (see the sketch after this list)
- Soon: multi-armed bandits; AI-managed experiments
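As a rough illustration of how multi-turn workflows are grouped, the Python client lets related inferences share an episode, and feedback can then target the episode as a whole. This sketch assumes the embedded-gateway setup from the Quick Start and a hypothetical episode-level metric named user_satisfied:

from tensorzero import TensorZeroGateway

with TensorZeroGateway.build_embedded(clickhouse_url="...", config_file="...") as client:
    # First turn: the gateway starts a new episode for this inference
    first = client.inference(
        model_name="openai::gpt-4o-mini",
        input={"messages": [{"role": "user", "content": "Write a haiku about artificial intelligence."}]},
    )

    # Later turn: pass the same episode_id so both inferences are linked
    # (in practice you would also include the earlier conversation in the input)
    second = client.inference(
        model_name="openai::gpt-4o-mini",
        input={"messages": [{"role": "user", "content": "Now make it more optimistic."}]},
        episode_id=first.episode_id,
    )

    # Episode-level feedback; "user_satisfied" is an illustrative metric name
    # that would need to be defined in tensorzero.toml
    client.feedback(
        metric_name="user_satisfied",
        episode_id=first.episode_id,
        value=True,
    )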
Build with an open-source stack well-suited for prototypes but designed from the ground up to support the most complex LLM applications and deployments.
- Build simple applications or massive deployments with GitOps-friendly orchestration
- Extend TensorZero with built-in escape hatches, programmatic-first usage, direct database access, and more
- Integrate with third-party tools: specialized observability and evaluations, model providers, agent orchestration frameworks, etc.
- Soon: UI playground
Watch LLMs get better at data extraction in real-time with TensorZero!
Dynamic in-context learning (DICL) is a powerful inference-time optimization available out of the box with TensorZero. It enhances LLM performance by automatically incorporating relevant historical examples into the prompt, without the need for model fine-tuning.
Start building today. The Quick Start shows how easy it is to set up an LLM application with TensorZero.
Questions? Ask us on Slack or Discord.
Using TensorZero at work? Email us at hello@tensorzero.com to set up a Slack or Teams channel with your team (free).
Work with us. We're hiring in NYC. We'd also welcome open-source contributions!
We are working on a series of complete runnable examples illustrating TensorZero's data & learning flywheel.
Optimizing Data Extraction (NER) with TensorZero
This example shows how to use TensorZero to optimize a data extraction pipeline. We demonstrate techniques like fine-tuning and dynamic in-context learning (DICL). In the end, an optimized GPT-4o Mini model outperforms GPT-4o on this task, at a fraction of the cost and latency, using a small amount of training data.
Agentic RAG β Multi-Hop Question Answering with LLMs
This example shows how to build a multi-hop retrieval agent using TensorZero. The agent iteratively searches Wikipedia to gather information, and decides when it has enough context to answer a complex question.
Writing Haikus to Satisfy a Judge with Hidden Preferences
This example fine-tunes GPT-4o Mini to generate haikus tailored to a specific taste. You'll see TensorZero's "data flywheel in a box" in action: better variants lead to better data, and better data leads to better variants. You'll see progress by fine-tuning the LLM multiple times.
Improving LLM Chess Ability with Best-of-N Sampling
This example showcases how best-of-N sampling can significantly enhance an LLM's chess-playing abilities by selecting the most promising moves from multiple generated options.
Improving Math Reasoning with a Custom Recipe for Automated Prompt Engineering (DSPy)
TensorZero provides a number of pre-built optimization recipes covering common LLM engineering workflows. But you can also easily create your own recipes and workflows! This example shows how to optimize a TensorZero function using an arbitrary tool, in this case DSPy.
& many more on the way!