# aichats

Standalone, embeddable AI chat bar with multi-provider LLM support and tool calling. Drop it into any website or framework — it handles the full LLM conversation loop, tool execution, and UI out of the box.
- Works with OpenAI, Anthropic, LM Studio, Ollama, and any OpenAI-compatible API
- Full tool-calling loop — the LLM calls your tools, gets results, and responds
- Zero dependencies — one script tag or one npm import
- Fully themeable via CSS custom properties
- Expand/collapse panel, mobile-responsive, keyboard accessible
- Install
- Quick start
- Configuration
- Tool calling
- Server route examples
- Client-side action handler
- Instance methods
- Theming
- Types
- Architecture
- License
## Install

```bash
npm install aichats
```

Or via CDN / script tag:

```html
<script src="https://unpkg.com/aichats/dist/ai-chat-plugin.js"></script>
```

## Quick start

### ES module

```ts
import { create } from "aichats";

const chat = create({
  provider: "openai",
  apiKey: "sk-...",
  systemPrompt: "You are a helpful assistant for an e-commerce store.",
  tools: [
    {
      type: "function",
      function: {
        name: "search_products",
        description: "Search the product catalog",
        parameters: {
          type: "object",
          properties: { query: { type: "string" } },
          required: ["query"],
        },
      },
    },
  ],
  actionEndpoint: "/api/chat",
});
```

### Script tag

```html
<script src="https://unpkg.com/aichats/dist/ai-chat-plugin.js"></script>
<script>
  AIChatPlugin.create({
    apiKey: "sk-...",
    provider: "openai",
    title: "Store Assistant",
    tools: [
      {
        type: "function",
        function: {
          name: "search_products",
          description: "Search the product catalog",
          parameters: {
            type: "object",
            properties: { query: { type: "string" } },
            required: ["query"],
          },
        },
      },
    ],
    actionEndpoint: "/api/chat",
  });
</script>
```

### Next.js (App Router)

```tsx
"use client";

import { useEffect } from "react";

const TOOLS = [
  {
    type: "function" as const,
    function: {
      name: "list_orders",
      description: "List recent orders for the current user",
      parameters: { type: "object", properties: {}, required: [] },
    },
  },
  {
    type: "function" as const,
    function: {
      name: "track_order",
      description: "Get tracking status for an order",
      parameters: {
        type: "object",
        properties: { order_id: { type: "string" } },
        required: ["order_id"],
      },
    },
  },
];

export default function ChatWidget() {
  useEffect(() => {
    import("aichats").then((mod) => {
      mod.create({
        provider: "openai",
        apiKey: process.env.NEXT_PUBLIC_AI_KEY!,
        systemPrompt: "You help customers track orders and answer questions.",
        tools: TOOLS,
        actionEndpoint: "/api/chat",
      });
    });
  }, []);
  return null;
}
```

### React (Vite)

```tsx
import { useEffect, useRef } from "react";
import { create, ChatBar } from "aichats";

export default function ChatWidget() {
  const chatRef = useRef<ChatBar | null>(null);

  useEffect(() => {
    if (chatRef.current) return;
    chatRef.current = create({
      provider: "anthropic",
      apiKey: import.meta.env.VITE_ANTHROPIC_KEY,
      model: "claude-sonnet-4-6",
      systemPrompt: "You are a helpful assistant.",
      tools: TOOLS, // same TOOLS array as in the Next.js example above
      actionEndpoint: "/api/chat",
    });
    return () => {
      chatRef.current?.destroy();
      chatRef.current = null;
    };
  }, []);
  return null;
}
```

## Configuration

All options are passed to `create()` (or `new ChatBar(config)`):
| Option | Type | Default | Description |
|---|---|---|---|
| `provider` | `"openai" \| "anthropic" \| "lmstudio"` | `"openai"` | LLM provider. Any OpenAI-compatible API (LM Studio, Ollama, etc.) uses `"openai"` or `"lmstudio"`. |
| `apiKey` | `string` | required | API key for the provider. Stays client-side, never sent to your server. |
| `model` | `string` | Provider default | Model override (e.g. `"gpt-4o"`, `"claude-sonnet-4-6"`). |
| `baseUrl` | `string` | Provider default | Base URL override for the LLM API. |
| `proxyUrl` | `string` | — | Proxy all LLM requests through this URL. Useful for CORS when running local models. |
| `systemPrompt` | `string` | — | System prompt — tells the LLM what it can do, what tools are available, and how to behave. |
| `tools` | `ToolDef[]` | `[]` | Tools the LLM can call. Uses the OpenAI function-calling schema. |
| `actionEndpoint` | `string` | — | Server endpoint that executes tool calls. The plugin POSTs `{ action, args }` and expects `{ result }`. |
| `onAction` | `(name, args) => Promise<string>` | — | Custom action handler. If set, tool calls go here instead of `actionEndpoint`. Return a string the LLM will see. |
| `maxIterations` | `number` | `8` | Max LLM-to-tool round-trips per user message (prevents runaway loops). |
| `title` | `string` | `"AI Chat"` | Title shown in the chat header. |
| `subtitle` | `string` | `"Describe what you want"` | Subtitle / placeholder text. |
| `position` | `"bottom-right" \| "bottom-left"` | `"bottom-right"` | Position of the chat FAB and panel on screen. |
| `open` | `boolean` | `false` | Open the chat panel immediately on init. |
| `onOpen` | `() => void` | — | Callback when the chat panel opens. |
| `onClose` | `() => void` | — | Callback when the chat panel closes. |
| `onError` | `(error: Error) => void` | — | Callback on LLM or action errors. |
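As a concrete example, the options for a locally hosted model might combine `baseUrl`, `maxIterations`, and `onError` like this. This is a sketch: the URL, key, and model name below are placeholders for whatever your local server (LM Studio, Ollama, etc.) actually exposes, not values the package prescribes.

```ts
import { create } from "aichats";

const chat = create({
  provider: "lmstudio",
  apiKey: "not-needed-locally",        // many local servers ignore the key
  baseUrl: "http://localhost:1234/v1", // placeholder: your local endpoint
  model: "your-local-model",           // placeholder: your loaded model
  maxIterations: 4,                    // cap tool round-trips per message
  onError: (err) => console.error("chat error:", err),
});
```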
## Tool calling

The plugin runs a full tool-calling loop automatically. You define tools, the LLM decides when to call them, the plugin executes the call against your server (or a client-side handler), feeds the result back to the LLM, and the LLM responds to the user. This can loop multiple times per message (up to `maxIterations`).

Tools use the OpenAI function-calling schema:
```ts
import type { ToolDef } from "aichats";

const tools: ToolDef[] = [
  {
    type: "function",
    function: {
      name: "search_products",
      description: "Search the product catalog by keyword",
      parameters: {
        type: "object",
        properties: {
          query: { type: "string", description: "Search query" },
          category: { type: "string", description: "Product category filter" },
          limit: { type: "number", description: "Max results (default 10)" },
        },
        required: ["query"],
      },
    },
  },
  {
    type: "function",
    function: {
      name: "get_order",
      description: "Get details for a specific order by ID",
      parameters: {
        type: "object",
        properties: {
          order_id: { type: "string", description: "The order ID" },
        },
        required: ["order_id"],
      },
    },
  },
  {
    type: "function",
    function: {
      name: "create_ticket",
      description: "Create a support ticket",
      parameters: {
        type: "object",
        properties: {
          subject: { type: "string" },
          body: { type: "string" },
          priority: { type: "string", enum: ["low", "medium", "high"] },
        },
        required: ["subject", "body"],
      },
    },
  },
];
```

### How the loop works

1. User sends a message
2. Plugin sends the conversation + tool definitions to the LLM
3. LLM responds with text, tool calls, or both
4. If tool calls are present, the plugin executes each one via `actionEndpoint` or `onAction`
5. Tool results are fed back to the LLM
6. Steps 2-5 repeat until the LLM responds with just text (no more tool calls)
7. The final text response is shown to the user
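The steps above can be sketched in a few lines. This is an illustration of the control flow, not the library's actual implementation; `callLLM` and `runTool` are stand-ins for the provider request and your action handler.

```typescript
type ToolCall = { id: string; function: { name: string; arguments: string } };
type LLMResult = { text: string; toolCalls: ToolCall[] };
type Message = { role: string; content: string; tool_call_id?: string };

// Sketch of the tool-calling loop: ask the LLM, run any requested tools,
// feed results back, and stop once the LLM answers with plain text.
async function runLoop(
  messages: Message[],
  callLLM: (msgs: Message[]) => Promise<LLMResult>,
  runTool: (name: string, args: Record<string, unknown>) => Promise<string>,
  maxIterations = 8
): Promise<string> {
  for (let i = 0; i < maxIterations; i++) {
    const result = await callLLM(messages);
    if (result.toolCalls.length === 0) return result.text; // no tool calls: done
    for (const call of result.toolCalls) {
      const output = await runTool(
        call.function.name,
        JSON.parse(call.function.arguments)
      );
      messages.push({ role: "tool", content: output, tool_call_id: call.id });
    }
  }
  return "Stopped after reaching maxIterations.";
}
```

The `maxIterations` bound is what prevents a model that keeps requesting tools from looping forever.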
The blue status pills in the UI show brief confirmations (e.g. "list orders ✓", "Build queued"). Raw JSON and large payloads are hidden — the LLM summarizes the results in its response.
The plugin POSTs tool calls to your `actionEndpoint`:

```http
POST /api/chat
Content-Type: application/json

{ "action": "search_products", "args": { "query": "shoes", "limit": 10 } }
```

Your server must respond with:

```json
{ "result": "stringified result that the LLM will see" }
```

On error:

```json
{ "error": "Something went wrong" }
```

The result is a string — typically JSON-stringified data. The LLM reads it and summarizes it for the user.
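For reference, the wire format can be written down as TypeScript types. The type names here are illustrative; the package does not export them, they just restate the protocol above.

```typescript
// Illustrative types for the actionEndpoint wire format (these names are
// not exported by the package; they only restate the documented protocol).
interface ActionRequest {
  action: string;                 // the tool/function name the LLM called
  args: Record<string, unknown>;  // the parsed tool-call arguments
}

type ActionResponse =
  | { result: string }            // success: the string the LLM will read
  | { error: string };            // failure: surfaced via onError

// The example request above, as a typed value:
const req: ActionRequest = {
  action: "search_products",
  args: { query: "shoes", limit: 10 },
};
const ok: ActionResponse = { result: JSON.stringify([{ id: 1, name: "Shoes" }]) };
```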
## Server route examples

### Next.js (App Router)

```ts
// app/api/chat/route.ts
import { NextResponse } from "next/server";
import { db } from "@/lib/db";

export async function POST(request: Request) {
  const { action, args } = await request.json();
  try {
    const result = await executeAction(action, args);
    return NextResponse.json({ result });
  } catch (err) {
    const msg = err instanceof Error ? err.message : String(err);
    return NextResponse.json({ error: msg }, { status: 500 });
  }
}

async function executeAction(
  action: string,
  args: Record<string, unknown>
): Promise<string> {
  switch (action) {
    case "search_products": {
      const products = await db.product.findMany({
        where: {
          name: { contains: String(args.query), mode: "insensitive" },
          ...(args.category ? { category: String(args.category) } : {}),
        },
        take: Number(args.limit) || 10,
        select: { id: true, name: true, price: true, category: true },
      });
      return JSON.stringify(products);
    }
    case "get_order": {
      const order = await db.order.findUnique({
        where: { id: String(args.order_id) },
        include: { items: true },
      });
      if (!order) return "Order not found.";
      return JSON.stringify(order);
    }
    case "create_ticket": {
      const ticket = await db.ticket.create({
        data: {
          subject: String(args.subject),
          body: String(args.body),
          priority: String(args.priority ?? "medium"),
        },
      });
      return JSON.stringify({
        message: `Ticket #${ticket.id} created.`,
        id: ticket.id,
      });
    }
    default:
      return `Unknown action: ${action}`;
  }
}
```

### Express

```ts
// server.ts
import express from "express";
import cors from "cors";
import { Pool } from "pg";

const app = express();
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

app.use(cors());
app.use(express.json());

app.post("/api/chat", async (req, res) => {
  const { action, args } = req.body;
  try {
    const result = await executeAction(action, args);
    res.json({ result });
  } catch (err) {
    const msg = err instanceof Error ? err.message : String(err);
    res.status(500).json({ error: msg });
  }
});

async function executeAction(action: string, args: any): Promise<string> {
  switch (action) {
    case "search_products": {
      const { rows } = await pool.query(
        "SELECT id, name, price FROM products WHERE name ILIKE $1 LIMIT $2",
        [`%${args.query}%`, args.limit || 10]
      );
      return JSON.stringify(rows);
    }
    case "get_order": {
      const { rows } = await pool.query(
        "SELECT * FROM orders WHERE id = $1",
        [args.order_id]
      );
      if (rows.length === 0) return "Order not found.";
      return JSON.stringify(rows[0]);
    }
    case "create_ticket": {
      const { rows } = await pool.query(
        "INSERT INTO tickets (subject, body, priority) VALUES ($1, $2, $3) RETURNING id",
        [args.subject, args.body, args.priority ?? "medium"]
      );
      return JSON.stringify({ message: `Ticket #${rows[0].id} created.` });
    }
    default:
      return `Unknown action: ${action}`;
  }
}

app.listen(3001);
```

### Hono

```ts
// src/index.ts
import { Hono } from "hono";
import { cors } from "hono/cors";

const app = new Hono();
app.use("*", cors());

app.post("/api/chat", async (c) => {
  const { action, args } = await c.req.json();
  try {
    const result = await executeAction(action, args);
    return c.json({ result });
  } catch (err) {
    const msg = err instanceof Error ? err.message : String(err);
    return c.json({ error: msg }, 500);
  }
});

async function executeAction(action: string, args: any): Promise<string> {
  switch (action) {
    case "list_items": {
      const res = await fetch("https://api.example.com/items");
      const items = await res.json();
      return JSON.stringify(items);
    }
    case "create_item": {
      const res = await fetch("https://api.example.com/items", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ name: args.name, description: args.description }),
      });
      const item = await res.json();
      return JSON.stringify({ message: `Created "${item.name}".`, id: item.id });
    }
    default:
      return `Unknown action: ${action}`;
  }
}

export default app;
```

### FastAPI

```python
# main.py
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import JSONResponse
from pydantic import BaseModel
import json, asyncpg, os

app = FastAPI()
app.add_middleware(
    CORSMiddleware, allow_origins=["*"], allow_methods=["*"], allow_headers=["*"]
)

class ChatAction(BaseModel):
    action: str
    args: dict = {}

pool = None

@app.on_event("startup")
async def startup():
    global pool
    pool = await asyncpg.create_pool(os.environ["DATABASE_URL"])

@app.post("/api/chat")
async def chat(body: ChatAction):
    try:
        result = await execute_action(body.action, body.args)
        return {"result": result}
    except Exception as e:
        return JSONResponse({"error": str(e)}, status_code=500)

async def execute_action(action: str, args: dict) -> str:
    if action == "search_products":
        rows = await pool.fetch(
            "SELECT id, name, price FROM products WHERE name ILIKE $1 LIMIT $2",
            f"%{args['query']}%", args.get("limit", 10)
        )
        return json.dumps([dict(r) for r in rows], default=str)
    if action == "get_order":
        row = await pool.fetchrow("SELECT * FROM orders WHERE id = $1", args["order_id"])
        if not row:
            return "Order not found."
        return json.dumps(dict(row), default=str)
    if action == "create_ticket":
        row = await pool.fetchrow(
            "INSERT INTO tickets (subject, body, priority) VALUES ($1, $2, $3) RETURNING id",
            args["subject"], args["body"], args.get("priority", "medium")
        )
        return json.dumps({"message": f"Ticket #{row['id']} created."})
    return f"Unknown action: {action}"
```

The result string is what the LLM sees. Return data the LLM can interpret:
```ts
// Good — the LLM can read and summarize this
return JSON.stringify(products);

// Good — short message shown directly in the blue status pill
return JSON.stringify({ message: "Build queued." });

// Good — data + human-readable message
return JSON.stringify({
  message: "Found 3 products.",
  products: [{ name: "Shoes", price: 89.99 }, ...],
});

// Also fine — plain text
return "No results found.";
```

If your result JSON contains a `message` field that is under 80 characters, it is shown as a blue status pill in the chat UI. Otherwise the pill shows a brief "tool_name ✓" and the LLM summarizes the data in its response.
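The pill rule can be approximated like this. This is an assumption about the documented behavior, not the library's actual source.

```typescript
// Approximation of the status-pill rule: a JSON result with a short
// `message` field is shown verbatim; anything else gets "tool_name ✓".
// (Assumed behavior restated from the docs, not the library's code.)
function pillLabel(toolName: string, result: string): string {
  try {
    const parsed = JSON.parse(result);
    if (
      parsed &&
      typeof parsed.message === "string" &&
      parsed.message.length < 80
    ) {
      return parsed.message;
    }
  } catch {
    // result was not JSON; fall through to the generic label
  }
  return `${toolName} ✓`;
}
```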
## Client-side action handler

For tools that don't need a server (e.g. client-side state, browser APIs, or in-memory data), use `onAction` instead of `actionEndpoint`:

```ts
create({
  provider: "openai",
  apiKey: "sk-...",
  tools: [
    {
      type: "function",
      function: {
        name: "get_cart",
        description: "Get the current shopping cart contents",
        parameters: { type: "object", properties: {}, required: [] },
      },
    },
    {
      type: "function",
      function: {
        name: "add_to_cart",
        description: "Add a product to the cart",
        parameters: {
          type: "object",
          properties: {
            product_id: { type: "string" },
            quantity: { type: "number" },
          },
          required: ["product_id"],
        },
      },
    },
  ],
  onAction: async (name, args) => {
    switch (name) {
      case "get_cart":
        return JSON.stringify(cartStore.getItems());
      case "add_to_cart":
        cartStore.add(args.product_id as string, (args.quantity as number) ?? 1);
        return JSON.stringify({ message: "Added to cart." });
      default:
        return `Unknown action: ${name}`;
    }
  },
});
```

### Mixed client/server handling

Use `onAction` to handle some tools client-side and fall through to your server for others:
```ts
create({
  // ...
  onAction: async (name, args) => {
    // Handle client-side tools
    if (name === "get_cart") return JSON.stringify(cartStore.getItems());
    if (name === "get_theme") return document.documentElement.dataset.theme ?? "light";

    // Everything else goes to the server
    const res = await fetch("/api/chat", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ action: name, args }),
    });
    const json = await res.json();
    if (!res.ok) throw new Error(json.error ?? "Action failed");
    return json.result;
  },
});
```

## Instance methods

`create()` returns a `ChatBar` instance:
```ts
import { create } from "aichats";

const chat = create({ ... });
```

| Method | Description |
|---|---|
| `toggle(open?)` | Toggle the chat panel. Pass `true` to open, `false` to close. |
| `toggleExpand(expanded?)` | Toggle expanded view. The panel grows to 800px wide. Pass `true`/`false` to force. |
| `send(text)` | Programmatically send a message as the user. Returns a promise. |
| `setTools(tools)` | Replace the available tools at runtime. |
| `setSystemPrompt(prompt)` | Replace the system prompt at runtime. |
| `clear()` | Clear the conversation history and reset the UI. |
| `destroy()` | Remove the chat bar from the DOM entirely. |
```ts
// Open the chat on page load
chat.toggle(true);

// Expand to the larger view
chat.toggleExpand(true);

// Send a message programmatically
await chat.send("Show me my recent orders");

// Update tools based on user context
chat.setTools([...baseTools, ...adminTools]);

// Clean up on unmount
chat.destroy();
```

## Theming

All styles use CSS custom properties scoped under `.acp-root`. Override them anywhere on your page — no build step required.
| Variable | Default | Description |
|---|---|---|
| `--acp-primary` | `#18181b` | Primary color (FAB, send button, user messages) |
| `--acp-primary-hover` | `#27272a` | Primary hover state |
| `--acp-bg` | `#ffffff` | Panel background |
| `--acp-bg-muted` | `#f4f4f5` | Muted background (assistant messages) |
| `--acp-bg-accent` | `#fafafa` | Accent background (header, input area) |
| `--acp-text` | `#18181b` | Primary text color |
| `--acp-text-muted` | `#71717a` | Secondary text color |
| `--acp-text-inverse` | `#ffffff` | Text on primary backgrounds |
| `--acp-border` | `#e4e4e7` | Border color |
| `--acp-blue` | `#3b82f6` | Action pill text color |
| `--acp-blue-light` | `#eff6ff` | Action pill background |
| `--acp-radius` | `12px` | Panel border radius |
| `--acp-radius-sm` | `8px` | Message border radius |
| `--acp-font` | `system-ui, -apple-system, sans-serif` | Font family |
| `--acp-font-size` | `14px` | Base font size |
| `--acp-font-size-sm` | `12px` | Small font size |
| `--acp-shadow` | `0 8px 30px rgba(0,0,0,.12)` | Panel shadow |
| `--acp-width` | `420px` | Panel width (default state) |
| `--acp-height` | `600px` | Panel height (default state) |
| `--acp-z` | `99999` | z-index for the FAB and panel |
```css
/* Dark theme */
:root {
  --acp-primary: #6366f1;
  --acp-primary-hover: #4f46e5;
  --acp-bg: #0f172a;
  --acp-bg-muted: #1e293b;
  --acp-bg-accent: #0f172a;
  --acp-text: #e2e8f0;
  --acp-text-muted: #94a3b8;
  --acp-text-inverse: #ffffff;
  --acp-border: #334155;
  --acp-blue: #60a5fa;
  --acp-blue-light: #1e3a5f;
  --acp-shadow: 0 8px 30px rgba(0, 0, 0, 0.4);
}
```

```css
/* Green / eco brand */
:root {
  --acp-primary: #16a34a;
  --acp-primary-hover: #15803d;
  --acp-blue: #16a34a;
  --acp-blue-light: #f0fdf4;
}

/* Purple / SaaS */
:root {
  --acp-primary: #7c3aed;
  --acp-primary-hover: #6d28d9;
  --acp-blue: #7c3aed;
  --acp-blue-light: #f5f3ff;
}
```

```css
/* Wider panel */
:root {
  --acp-width: 500px;
  --acp-height: 700px;
}

/* Full-height sidebar */
:root {
  --acp-width: 380px;
  --acp-height: calc(100vh - 40px);
}
```

If your app uses Tailwind, reference Tailwind's CSS variables:

```css
:root {
  --acp-primary: theme(colors.indigo.600);
  --acp-primary-hover: theme(colors.indigo.700);
  --acp-bg: theme(colors.white);
  --acp-bg-muted: theme(colors.gray.100);
  --acp-text: theme(colors.gray.900);
  --acp-text-muted: theme(colors.gray.500);
  --acp-border: theme(colors.gray.200);
  --acp-font: theme(fontFamily.sans);
}
```

The variables are scoped to `.acp-root`, so you can also override per-instance if you have multiple chat bars (rare but possible):

```css
/* Only affects the chat bar, not the rest of the page */
.acp-root {
  --acp-primary: #dc2626;
}
```

### Responsive behavior

- On screens under 480px, the panel automatically goes fullscreen (100vw x 100vh)
- The expanded state (`toggleExpand()`) stretches to `min(800px, calc(100vw - 40px))`
- Both states adapt to mobile automatically
## Types

All types are exported for TypeScript:

```ts
import type {
  ChatBarConfig,
  ChatMessage,
  Provider,
  ToolDef,
  ToolCall,
  LLMResult,
} from "aichats";
```

```ts
interface ToolDef {
  type: "function";
  function: {
    name: string;
    description: string;
    parameters: Record<string, unknown>; // JSON Schema
  };
}

interface ChatBarConfig {
  provider?: "openai" | "anthropic" | "lmstudio";
  apiKey: string;
  model?: string;
  baseUrl?: string;
  proxyUrl?: string;
  systemPrompt?: string;
  tools?: ToolDef[];
  actionEndpoint?: string;
  onAction?: (name: string, args: Record<string, unknown>) => Promise<string>;
  maxIterations?: number;
  title?: string;
  subtitle?: string;
  position?: "bottom-right" | "bottom-left";
  open?: boolean;
  onOpen?: () => void;
  onClose?: () => void;
  onError?: (error: Error) => void;
}

interface ChatMessage {
  role: "system" | "user" | "assistant" | "tool";
  content: string;
  tool_call_id?: string;
  tool_calls?: ToolCall[];
}

interface ToolCall {
  id: string;
  function: { name: string; arguments: string };
}

interface LLMResult {
  text: string;
  toolCalls: ToolCall[];
}
```

## Architecture

```
┌──────────────────────────────────────────────────────┐
│                       Browser                        │
│                                                      │
│  ┌────────────┐       ┌─────────────┐                │
│  │  Your App  │──────▶│   aichats   │                │
│  │ (create()) │       │   ChatBar   │                │
│  └────────────┘       └──────┬──────┘                │
│                              │                       │
│                ┌─────────────┼─────────────┐         │
│                ▼             ▼             ▼         │
│         ┌──────────┐  ┌──────────┐  ┌────────────┐   │
│         │  OpenAI  │  │ Anthropic│  │ LM Studio  │   │
│         │   API    │  │   API    │  │  (local)   │   │
│         └──────────┘  └──────────┘  └────────────┘   │
│                              │                       │
│                              ▼  (tool calls)         │
│                   ┌──────────────────┐               │
│                   │  actionEndpoint  │──── POST { action, args }
│                   │   or onAction()  │◀─── { result }│
│                   └──────────────────┘               │
│                                                      │
└──────────────────────────────────────────────────────┘
                              │
                              ▼
                   ┌──────────────────────┐
                   │     Your Server      │
                   │      /api/chat       │
                   │                      │
                   │  switch (action) {   │
                   │    case "search":    │
                   │      → query DB      │
                   │    case "create":    │
                   │      → insert DB     │
                   │  }                   │
                   └──────────────────────┘
```
- aichats renders the chat UI and manages the conversation
- LLM requests go directly from the browser to the provider (OpenAI, Anthropic, etc.)
- When the LLM makes tool calls, aichats sends them to your `actionEndpoint` (or `onAction`)
- Your server executes the action (DB queries, API calls, etc.) and returns a string result
- aichats feeds the result back to the LLM, which responds to the user
- The API key lives in the browser config — your server never sees it
## License

MIT