The official Node/TypeScript SDK for Freeplay
The ops platform for enterprise AI engineering teams
Docs • Quick Start • SDK Setup • API Reference • Changelog • Contributing
Freeplay is the only platform your team needs to manage the end-to-end AI application development lifecycle. It provides an integrated workflow for improving your AI agents and other generative AI products. Engineers, data scientists, product managers, designers, and subject matter experts can all review production logs, curate datasets, experiment with changes, create and run evaluations, and deploy updates.
Use this SDK to integrate with Freeplay's core capabilities:
- Observability
  - Sessions — group related interactions together, e.g. for multi-turn chat or complex agent interactions
  - Traces — track multi-step agent workflows within sessions
  - Completions — record LLM interactions for observability and debugging
  - Customer Feedback — append user feedback and events to traces and completions
- Prompts — version, format, and fetch prompt templates across environments
- Test Runs — execute evaluation runs against prompts and datasets
Requirements:
- Node.js 18 or higher
- A Freeplay account and API key
Install the SDK:

npm install freeplay

Quick start:

import Freeplay from "freeplay";
import OpenAI from "openai";
const fpClient = new Freeplay({
freeplayApiKey: process.env.FREEPLAY_API_KEY,
});
const openai = new OpenAI();
const projectId = process.env.FREEPLAY_PROJECT_ID;
// Fetch a prompt from Freeplay
const formattedPrompt = await fpClient.prompts.getFormatted({
projectId,
templateName: "my-prompt",
environment: "prod",
variables: { user_input: "Hello, world!" },
});
// Call your LLM provider with formattedPrompt.llmPrompt
const response = await openai.chat.completions.create({
model: formattedPrompt.promptInfo.model,
messages: formattedPrompt.llmPrompt,
});
// Record the interaction for observability
await fpClient.recordings.create({
projectId,
allMessages: formattedPrompt.allMessages({
role: "assistant",
content: response.choices[0].message.content,
}),
});

See the SDK Setup guide for complete examples.

Configure the SDK via environment variables:
export FREEPLAY_API_KEY="fp_..."
export FREEPLAY_PROJECT_ID="xy..."
# Optional: override if using a custom domain / private deployment
export FREEPLAY_API_BASE="https://app.freeplay.ai/api"

API base URL:
Default: https://app.freeplay.ai/api
Custom domain/private deployment: https://<your-domain>/api
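If you point the SDK at a private deployment, the override can also be passed when constructing the client. A minimal sketch, assuming the constructor accepts an apiBase option (the option name is an assumption chosen to mirror the FREEPLAY_API_BASE variable; see the SDK Setup guide for the exact constructor options):

import Freeplay from "freeplay";

// Sketch only: "apiBase" is an assumed option name; check the SDK Setup
// guide for the actual constructor signature.
const fpClient = new Freeplay({
  freeplayApiKey: process.env.FREEPLAY_API_KEY,
  apiBase: process.env.FREEPLAY_API_BASE ?? "https://app.freeplay.ai/api",
});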
Merge Semantics: New keys overwrite existing keys, preserving unmentioned keys.
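For example, treating the existing keys and the update as plain objects (the keys shown are illustrative), the result of an update looks like this:

// Keys that are re-sent are overwritten; keys that aren't mentioned are preserved.
const existing = { environment: "prod", userTier: "free", region: "us-east" };
const update = { userTier: "pro", abTestGroup: "b" };

const merged = { ...existing, ...update };
// => { environment: "prod", userTier: "pro", region: "us-east", abTestGroup: "b" }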
The SDK supports multiple tool schema formats for different LLM providers. Tool schemas are passed as plain objects in the provider's native format.
Google GenAI (Gemini / Vertex AI) uses a nested structure in which a single tool contains multiple function declarations:
// Tool schema as a plain object (GenAI/Vertex format)
const toolSchema = [
{
functionDeclarations: [
{
name: "get_weather",
description: "Get the current weather for a location",
parameters: {
type: "object",
properties: {
location: { type: "string", description: "City name" },
units: {
type: "string",
enum: ["celsius", "fahrenheit"],
},
},
required: ["location"],
},
},
// Multiple functions can be in a single tool (GenAI-specific)
{
name: "get_news",
description: "Get latest news",
parameters: {
type: "object",
properties: {
topic: { type: "string" },
},
required: ["topic"],
},
},
],
},
];
await freeplay.recordings.create({
projectId,
allMessages: [...],
toolSchema,
callInfo: { provider: "vertex", model: "gemini-2.0-flash" },
});

OpenAI uses the standard function-calling format, with a single function per tool entry:

const toolSchema = [
{
type: "function",
function: {
name: "get_weather",
description: "Get weather information",
parameters: {
type: "object",
properties: {
location: { type: "string" },
},
required: ["location"],
},
},
},
];

Anthropic uses a flat format with an input_schema field:

const toolSchema = [
{
name: "get_weather",
description: "Get weather information",
input_schema: {
type: "object",
properties: {
location: { type: "string" },
},
required: ["location"],
},
},
];

Note: All formats are backward compatible. The backend automatically normalizes tool schemas regardless of format. Tool schemas should be passed as-is from the provider SDK (e.g., @google/generative-ai, openai, @anthropic-ai/sdk), similar to how messages are handled.
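For example, the same schema object you send to the provider can be handed to recordings.create unchanged. A minimal sketch using the OpenAI SDK (the model, messages, and the exact shape of the recorded assistant message are illustrative, not prescriptive):

import OpenAI from "openai";
import Freeplay from "freeplay";

const openai = new OpenAI();
const fpClient = new Freeplay({ freeplayApiKey: process.env.FREEPLAY_API_KEY });
const projectId = process.env.FREEPLAY_PROJECT_ID;

// OpenAI-format tool schema, defined once and reused
const toolSchema = [
  {
    type: "function" as const,
    function: {
      name: "get_weather",
      description: "Get weather information",
      parameters: {
        type: "object",
        properties: { location: { type: "string" } },
        required: ["location"],
      },
    },
  },
];

const messages = [
  { role: "user" as const, content: "What's the weather in Paris?" },
];

// Pass the schema to the provider...
const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages,
  tools: toolSchema,
});

// ...and record the same object with Freeplay, as-is
await fpClient.recordings.create({
  projectId,
  allMessages: [...messages, response.choices[0].message],
  toolSchema,
  callInfo: { provider: "openai", model: "gpt-4o" },
});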
See the Freeplay Docs at docs.freeplay.ai for more usage examples and the full API reference.
This SDK follows Semantic Versioning (SemVer): MAJOR.MINOR.PATCH.
- PATCH: bug fixes
- MINOR: backward-compatible features
- MAJOR: breaking changes
Before upgrading major versions, review the changelog.
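In practice, npm's default caret range in package.json picks up patch and minor releases automatically while keeping major upgrades a deliberate step:

# Saves a caret range, e.g. "freeplay": "^1.2.3" in package.json (version illustrative).
# ^1.2.3 accepts 1.2.4 and 1.3.0, but never 2.0.0.
npm install freeplay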
To set up the SDK for local development:

# Install dependencies
npm run safe-install

Support:

- Docs: https://docs.freeplay.ai
- Issues: https://github.com/freeplayai/freeplay-node/issues
- Security: security@freeplay.ai
To contribute, see CONTRIBUTING.md.
The SDK includes an interactive REPL for quick testing and development:
# 1. Create .env file (copy from .env.example)
cp .env.example .env
# Edit .env with your API keys
# 2. Start REPL
# Production mode (default) - connects to app.freeplay.ai
npm run repl
# Local development mode - connects to localhost:8000 with SSL bypass
npm run repl -- --local

The REPL provides:
- Pre-initialized client (Freeplay instance)
- Environment variables: projectId, sessionId, datasetId, apiBase
- Tab completion and syntax highlighting
Example REPL usage:
freeplay> await client.recordings.create({
projectId,
allMessages: [
{ role: 'user', content: 'Hello!' },
{ role: 'assistant', content: 'Hi there!' }
],
callInfo: { provider: 'openai', model: 'gpt-4' }
});

See REPL.md for detailed documentation.
Apache-2.0 — see LICENSE.