Mock infrastructure for AI application testing — LLM APIs, MCP tools, A2A agents, vector databases, search, rerank, and moderation. One package, one port, zero dependencies.
```sh
npm install @copilotkit/aimock
```

```ts
import { LLMock } from "@copilotkit/aimock";

const mock = new LLMock({ port: 0 });
mock.onMessage("hello", { content: "Hi there!" });
await mock.start();

process.env.OPENAI_BASE_URL = `${mock.url}/v1`;
// ... run your tests ...
await mock.stop();
```

aimock mocks everything your AI app talks to:
| Tool | What it mocks | Docs |
|---|---|---|
| LLMock | OpenAI, Claude, Gemini, Bedrock, Azure, Vertex AI, Ollama, Cohere | Providers |
| MCPMock | MCP tools, resources, prompts with session management | MCP |
| A2AMock | Agent-to-agent protocol with SSE streaming | A2A |
| VectorMock | Pinecone, Qdrant, ChromaDB compatible endpoints | Vector |
| Services | Tavily search, Cohere rerank, OpenAI moderation | Services |
Run them all on one port with `npx aimock --config aimock.json`, or use the programmatic API to compose exactly what you need.
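The config file's schema isn't documented in this snippet; as a rough sketch, a composed `aimock.json` might enable each mock alongside shared settings like the port and fixture directory (the field names below are illustrative assumptions, not the actual schema):

```json
{
  "port": 4010,
  "fixtures": "./fixtures",
  "llm": { "providers": ["openai", "anthropic"] },
  "mcp": { "enabled": true },
  "vector": { "enabled": true }
}
```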
- Record & Replay — Proxy real APIs, save as fixtures, replay deterministically forever
- 11 LLM Providers — OpenAI, Claude, Gemini, Bedrock, Azure, Vertex AI, Ollama, Cohere — full streaming support
- MCP / A2A / Vector — Mock every protocol your AI agents use
- Chaos Testing — 500 errors, malformed JSON, mid-stream disconnects at any probability
- Drift Detection — Daily CI validation against real APIs
- Streaming Physics — Configurable `ttft`, `tps`, and `jitter`
- WebSocket APIs — OpenAI Realtime, Responses WS, Gemini Live
- Prometheus Metrics — Request counts, latencies, fixture match rates
- Docker + Helm — Container image and Helm chart for CI/CD
- Zero dependencies — Everything from Node.js builtins
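For record & replay, the on-disk fixture format isn't shown here, but conceptually each recorded fixture pairs a matched request with a canned response so replay stays deterministic. A purely illustrative sketch (not the actual schema):

```json
{
  "request": {
    "method": "POST",
    "path": "/v1/chat/completions",
    "match": { "content": "hello" }
  },
  "response": {
    "status": 200,
    "body": {
      "choices": [
        { "message": { "role": "assistant", "content": "Hi there!" } }
      ]
    }
  }
}
```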
```sh
# LLM mocking only
npx aimock -p 4010 -f ./fixtures

# Full suite from config
npx aimock --config aimock.json

# Record mode: proxy to real APIs, save fixtures
npx aimock --record --provider-openai https://api.openai.com

# Docker
docker run -d -p 4010:4010 -v ./fixtures:/fixtures ghcr.io/copilotkit/aimock -f /fixtures
```

Step-by-step migration guides: MSW · VidaiMock · mock-llm · Python mocks · Mokksy
AG-UI uses aimock for its end-to-end test suite, verifying AI agent behavior across LLM providers with fixture-driven responses.
MIT