Write complex integration tests with AI - AI assistants see your live page structure, execute code, and iterate until tests work
- Quick Start
- Why Testing MCP
- What Testing MCP Does
- Installation
- Configure MCP Server
- Connect From Tests
- MCP Tools
- Context and Available APIs
- Environment Variables
- FAQ
- How It Works
Step 1: Install
```bash
npm install -D testing-mcp
```

Step 2: Configure the Model Context Protocol (MCP) server (e.g., in your Claude Desktop config):

```json
{
"testing-mcp": {
"command": "npx",
"args": ["-y", "testing-mcp@latest"]
}
}
```

Step 3: Connect from your test:

```tsx
import { render, screen, fireEvent } from "@testing-library/react";
import { connect } from "testing-mcp";
it("your test", async () => {
render(<YourComponent />);
await connect({
context: { screen, fireEvent },
});
}, 600000); // 10-minute timeout for AI interaction
```

Step 4: Run with MCP enabled:
Prompt:
Please run the persistent test: `TESTING_MCP=true npm test test/example.test.tsx`,
then use testing-mcp to write the test in `test/example.test.tsx` with these steps:
1. Click the “count” button.
2. Verify that the number on the count button becomes “1”.
Now your AI assistant can see the page structure, execute code in the test, and help you write assertions.
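For reference, here is a minimal component the prompt above could target (hypothetical; `YourComponent` in step 3 stands in for your own code):

```tsx
import { useState } from "react";

// Hypothetical counter matching the prompt: clicking the "count"
// button increments the number shown on it.
export function Counter() {
  const [count, setCount] = useState(0);
  return <button onClick={() => setCount(count + 1)}>count {count}</button>;
}
```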
Traditional test writing is slow and frustrating:
- Write → Run → Read errors → Guess → Repeat - endless debugging cycles
- Add `console.log` statements manually - slow feedback loop
- AI assistants can't see your test state - you must describe everything
- Must manually explain available APIs - AI generates invalid code
Testing MCP solves this by giving AI assistants live access to your test environment:
- AI sees actual page structure (DOM), console logs, and rendered output
- AI executes code directly in tests without editing files
- AI knows exactly which testing APIs are available (screen, fireEvent, etc.)
- You iterate faster with real-time feedback instead of blind guessing
View live page structure snapshots, console logs, and test metadata through MCP tools. No more adding temporary console.log statements or running tests repeatedly.
Execute JavaScript/TypeScript directly in your running test environment. Test interactions, check page state, or run assertions without modifying test files.
Automatically collects and exposes available testing APIs (like screen, fireEvent, waitFor) with type information and descriptions. AI assistants know exactly what's available and generate valid code on the first try.
```ts
await connect({
context: { screen, fireEvent, waitFor },
contextDescriptions: {
screen: "React Testing Library screen with query methods",
fireEvent: "Function to trigger DOM events",
},
});
```

Reliable WebSocket connections with session tracking, reconnection support, and automatic cleanup. Multiple tests can connect simultaneously.
Automatically disabled in continuous integration (CI) environments. The `connect()` call becomes a no-op when `TESTING_MCP` is not set (including when it is called from hooks), so your tests run normally in CI and production.
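In practice this means `connect()` can stay in a test without a guard; a minimal sketch (`YourComponent` is a placeholder):

```tsx
import { render, screen } from "@testing-library/react";
import { connect } from "testing-mcp";

// When TESTING_MCP is unset (e.g., in CI), the await resolves
// immediately and this runs as a plain unit test.
it("renders without the bridge", async () => {
  render(<YourComponent />);
  await connect({ context: { screen } });
});
```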
Built specifically for AI assistants and the Model Context Protocol. Provides structured metadata, clear tool descriptions, and predictable responses optimized for AI understanding.
Install dependencies and build the project before launching the MCP server or consuming the client helper.
```bash
npm install -D testing-mcp
# or
yarn add -D testing-mcp
# or
pnpm add -D testing-mcp
```

Node 18+ is required because the project uses ES modules and the WebSocket API.
Add the MCP server to your AI assistant's configuration (e.g., Claude Desktop, VSCode, etc.):
```json
{
"testing-mcp": {
"command": "npx",
"args": ["-y", "testing-mcp@latest"]
}
}
```

The server opens a WebSocket bridge on port 3001 (configurable) and registers MCP tools for state inspection, file editing, and remote code execution.
Import the client helper in your Jest or Vitest suites (or setup hooks) to expose the page state to the MCP server.
Example Jest setup file (`setupFilesAfterEnv`):

```ts
// jest.setup.ts
import { screen, fireEvent } from "@testing-library/react";
import userEvent from "@testing-library/user-event";
import { connect } from "testing-mcp";
const timeout = 10 * 60 * 1000;
if (process.env.TESTING_MCP) {
jest.setTimeout(timeout);
}
afterEach(async () => {
if (!process.env.TESTING_MCP) return;
const state = expect.getState();
await connect({
port: 3001,
filePath: state.testPath,
context: {
userEvent,
screen,
fireEvent,
},
});
}, timeout);
```

It also supports usage directly in test files:

```tsx
// example.test.tsx
import { render, screen, fireEvent, waitFor } from "@testing-library/react";
import userEvent from "@testing-library/user-event";
import { connect } from "testing-mcp";
it(
"logs the dashboard state",
async () => {
render(<Dashboard />);
await connect({
port: 3001,
filePath: import.meta.url,
context: {
screen,
fireEvent,
userEvent,
waitFor,
},
// Optional: provide descriptions to help LLMs understand the APIs
contextDescriptions: {
screen: "React Testing Library screen with query methods",
fireEvent: "Synchronous event triggering function",
userEvent: "User interaction simulation library",
waitFor: "Async utility for waiting on conditions",
},
});
},
1000 * 60 * 10
);
```

Set `TESTING_MCP=true` locally to enable the bridge. The helper no-ops when the variable is missing or when the tests run in continuous integration.
If the DOM has already been automatically cleared by the time the `afterEach` hook executes, set `RTL_SKIP_AUTO_CLEANUP=true`.
Once connected, your AI assistant can use these tools:
| Tool | Purpose | When to Use |
| --- | --- | --- |
| `get_current_test_state` | Fetch current page structure, console logs, and APIs | Inspect what's rendered and which APIs are available |
| `execute_test_step` | Run JavaScript/TypeScript code in the test environment | Trigger interactions, check state, run assertions |
| `finalize_test` | Remove the `connect()` call and clean up the test file | After the test is complete and working |
| `list_active_tests` | Show all connected tests with timestamps | See which tests are available |
| `get_generated_code` | Extract code blocks inserted by the helper | Audit what code was added |
`get_current_test_state` returns the current test state, including:
- Page structure snapshot: Current rendered HTML (DOM)
- Console logs: Captured console output
- Test metadata: Test file path, test name, session ID
- Available context: List of all APIs/variables available in `execute_test_step`, including their types, signatures, and descriptions
The response includes an `availableContext` field:

```json
{
"availableContext": [
{
"name": "screen",
"type": "object",
"description": "React Testing Library screen object"
},
{
"name": "fireEvent",
"type": "function",
"signature": "(element, event) => ...",
"description": "Function to trigger DOM events"
}
]
}
```

`execute_test_step` executes JavaScript/TypeScript code in the connected test client. The code can use any API listed in the `availableContext` field from `get_current_test_state`.
Best practice: always call `get_current_test_state` first to check which APIs are available before using `execute_test_step`.
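For example, here is a step an assistant might send via `execute_test_step`, using the context from the examples above (a sketch; exactly how results are reported back is an assumption):

```ts
// Runs inside the connected test process. `screen` and `fireEvent` are
// not imported here; they come from the context passed to connect().
const button = screen.getByRole("button", { name: /count/i });
fireEvent.click(button);
// Assumption: the value of the final expression is reported back to the AI.
button.textContent;
```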
Inject testing utilities so AI knows what's available:
The connect() function accepts a context object that exposes APIs to the test execution environment. This allows AI assistants to know exactly what APIs are available when generating code.
```ts
await connect({
context: {
screen, // React Testing Library queries
fireEvent, // DOM event triggering
userEvent, // User interaction simulation
waitFor, // Async waiting utility
},
});
```

Provide descriptions for each context key to help AI understand what's available:

```ts
await connect({
context: {
screen,
fireEvent,
waitFor,
customHelper: async (text: string) => {
const button = screen.getByText(text);
fireEvent.click(button);
await waitFor(() => {});
},
},
contextDescriptions: {
screen: "Query methods like getByText, findByRole, etc.",
fireEvent: "Trigger DOM events: click, change, etc.",
waitFor: "Wait for assertions: waitFor(() => expect(...).toBe(...))",
customHelper: "async (text: string) => void - Clicks button by text",
},
});
```

How it works: the client collects metadata (name, type, function signature) for each context key. When the AI calls `get_current_test_state`, it receives the full list of available APIs with their metadata, enabling accurate code generation.
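As an illustration of the kind of metadata involved (not the library's actual implementation), a context object could be summarized roughly like this:

```ts
// Illustration only: deriving availableContext-style metadata
// from the keys and values of a context object.
function describeContext(context: Record<string, unknown>) {
  return Object.entries(context).map(([name, value]) => ({
    name,
    type: typeof value,
    // For functions, a rough signature can be read from the source text.
    signature:
      typeof value === "function"
        ? (value as (...args: unknown[]) => unknown).toString().split("{")[0].trim()
        : undefined,
  }));
}
```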
- `TESTING_MCP`: When set to `true`, enables the WebSocket bridge to the MCP server. Leave unset to disable (automatically disabled in CI environments).
- `TESTING_MCP_PORT`: Overrides the WebSocket port. Defaults to `3001`. Set this if the default port is occupied or you want multiple servers running.
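If you override the port, the test side must match. One option is to read the same variable when connecting (a sketch; assumes `TESTING_MCP_PORT` is also exported to the process that runs your tests):

```ts
await connect({
  // Fall back to the default port when TESTING_MCP_PORT is unset.
  port: Number(process.env.TESTING_MCP_PORT ?? 3001),
  context: { screen, fireEvent },
});
```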
Custom port example:
```json
{
"testing-mcp": {
"command": "npx",
"args": ["-y", "testing-mcp@latest"],
"env": {
"TESTING_MCP_PORT": "4001"
}
}
}
```

If testing-mcp fails to start in Cursor IDE, you can check detailed logs:
In Cursor IDE: Go to Output > MCP:user-testing-mcp to see detailed error information.
This will show you the exact error messages and help diagnose startup issues.
Each MCP client instance needs a unique port. If you want to run multiple testing-mcp instances simultaneously:
- Set a different `TESTING_MCP_PORT` value for each instance in the MCP server config.
- Pass the same port number to the `connect()` function in your tests:
```ts
// In your test
await connect({
port: 4001, // Match your custom port
context: { screen, fireEvent },
});
```

For example, to kill a process using the default port (macOS):

```bash
lsof -ti:3001 | xargs kill -9
```

Testing MCP currently supports only one WebSocket connection per test at a time.
When your MCP client runs the same test command multiple times (like in watch mode), each run creates a new WebSocket connection. This can cause conflicts and unexpected behavior.
Recommendation: Run tests individually without watch mode when using TESTING_MCP=true.
If tests with `TESTING_MCP=true` time out quickly, increase the test timeout.
AI assistants need time to inspect state and write tests - usually at least 5 minutes.
Set timeout in your test:
it("your test", async () => {
render(<YourComponent />);
await connect({ context: { screen, fireEvent } });
}, 600000); // 10 minutes = 600000 ms
```

Yes, if your tests don't automatically clear the DOM between tests.
By placing connect() in an afterEach hook in your setup file, you can make testing completely non-invasive and easier for automated test writing.
Example Jest setup file (`setupFilesAfterEnv`):

```ts
// jest.setup.ts
import { screen, fireEvent } from "@testing-library/react";
import userEvent from "@testing-library/user-event";
import { connect } from "testing-mcp";
const timeout = 10 * 60 * 1000;
if (process.env.TESTING_MCP) {
jest.setTimeout(timeout);
}
afterEach(async () => {
if (!process.env.TESTING_MCP) return;
const state = expect.getState();
await connect({
port: 3001,
filePath: state.testPath,
context: {
userEvent,
screen,
fireEvent,
},
});
}, timeout);
```

Example Vitest setup file (`setupFiles`):

```ts
// vitest.setup.ts
import { beforeEach, afterEach, expect } from "vitest";
import { screen, fireEvent } from "@testing-library/react";
import userEvent from "@testing-library/user-event";
import { connect } from "testing-mcp";
const timeout = 10 * 60 * 1000;
beforeEach((context) => {
if (!process.env.TESTING_MCP) return;
Object.assign(context.task, {
timeout,
});
});
afterEach(async () => {
if (!process.env.TESTING_MCP) return;
const state = expect.getState();
await connect({
port: 3001,
filePath: state.testPath,
context: {
userEvent,
screen,
expect,
fireEvent,
},
});
}, timeout);
```

Important: this approach only works if your `afterEach` hooks don't automatically remove the DOM (e.g., you're not calling `cleanup()` before `connect()`).
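One way to satisfy that in a Jest setup file is to run React Testing Library's cleanup only when the bridge is inactive (a sketch; assumes RTL's automatic cleanup is disabled, e.g. with `RTL_SKIP_AUTO_CLEANUP=true` as noted above):

```ts
// jest.setup.ts (sketch)
import { cleanup } from "@testing-library/react";

afterEach(() => {
  // Keep the DOM alive for connect() while the MCP bridge is active;
  // clean up manually in normal runs.
  if (!process.env.TESTING_MCP) cleanup();
});
```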
Testing MCP uses a three-process architecture:
- Test process calls `connect()` to send page snapshots, console logs, and metadata to the server
- MCP server manages WebSocket connections, stores session state, and exposes MCP tools via stdio
- AI assistant calls MCP tools to inspect state and execute code remotely
Communication stays resilient to reconnections by tracking per-session UUIDs and cleaning up callbacks on close.
The system consists of three independent processes that communicate through two different protocols:

```text
┌──────────────────┐ ┌──────────────────┐ ┌──────────────────┐
│ Node.js Test │ │ MCP Server │ │ LLM/MCP │
│ Process │ │ Process │ │ Client │
└────────┬─────────┘ └────────┬─────────┘ └────────┬─────────┘
│ │ │
│ │◄───────────────────────────┤
│ │ 1. MCP Tool Call │
│ │ (via Stdio/JSON-RPC) │
│ │ │
│ 2. await connect() │ │
├───────────────────────────►│ │
│ Collects DOM & context │ │
│ │ │
│ 3. WebSocket: "ready" │ │
│ {dom, logs, context} │ │
├───────────────────────────►│ │
│ │ Stores session state │
│ │ │
│ 4. "connected" │ │
│ {sessionId} │ │
│◄───────────────────────────┤ │
│ │ │
│ Test waits... │ 5. Returns state │
│ ├───────────────────────────►│
│ │ {dom, logs, context} │
│ │ │
│ │◄───────────────────────────┤
│ │ 6. execute_test_step │
│ │ {code, sessionId} │
│ │ │
│ 7. "execute" │ │
│ {code, executionId} │ │
│◄───────────────────────────┤ │
│ │ │
│ Runs code with │ │
│ available context │ │
│ (screen, fireEvent...) │ │
│ │ │
│ 8. "executed" │ │
│ {result, newState} │ │
├───────────────────────────►│ │
│ │ 9. Returns result │
│ ├───────────────────────────►│
│ Test waits... │ {result, newState} │
│ │ │
│ │◄───────────────────────────┤
│ │ 10. finalize_test │
│ │ │
│ 11. "close" │ Removes connect() call │
│◄───────────────────────────┤ from test file (AST) │
│ │ │
│ Closes WebSocket │ │
│ Test completes │ │
│ │ 12. Returns success │
│ ├───────────────────────────►│
▼ ▼ ▼
Protocol Summary:
─────────────────
• Test Process ←→ MCP Server: WebSocket (port 3001)
Message types: ready, connected, execute, executed, close
• MCP Server ←→ LLM Client: Stdio/JSON-RPC (MCP Protocol)
Tools: get_current_test_state, execute_test_step, finalize_test,
list_active_tests, get_generated_code
```

- AI initiates: AI assistant calls MCP tools via stdio to interact with tests
- Test connects: Test process calls `await connect()`, which establishes a WebSocket connection to the MCP server
- Bidirectional sync: Test sends state updates; server executes code remotely
- Session tracking: Each test gets a unique `sessionId` for managing multiple concurrent connections
- Automatic cleanup: Server uses Abstract Syntax Tree (AST) manipulation to remove `connect()` calls when finalizing
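For orientation, the message shapes implied by the diagram could be typed roughly like this (field names beyond the diagram's labels are assumptions):

```ts
// Rough sketch of the WebSocket payloads named in the protocol summary.
interface ContextMeta {
  name: string;
  type: string;
  signature?: string;
  description?: string;
}

type TestToServer =
  | { type: "ready"; dom: string; logs: string[]; context: ContextMeta[] }
  | { type: "executed"; executionId: string; result: unknown; newState: unknown };

type ServerToTest =
  | { type: "connected"; sessionId: string }
  | { type: "execute"; code: string; executionId: string }
  | { type: "close" };
```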
MIT