Testing MCP
Write complex integration tests with AI - AI assistants see your live page structure, execute code, and iterate until tests work
Table of Contents
- Quick Start
- Why Testing MCP
- What Testing MCP Does
- Installation
- Configure MCP Server
- Connect From Tests
- MCP Tools
- Context and Available APIs
- Environment Variables
- FAQ
- How It Works
Quick Start
Step 1: Install
npm install -D testing-mcp
Step 2: Configure Model Context Protocol (MCP) server (e.g., in Claude Desktop config):
{
"testing-mcp": {
"command": "npx",
"args": ["-y", "testing-mcp@latest"]
}
}
Step 3: Connect from your test:
import { render, screen, fireEvent } from "@testing-library/react";
import { connect } from "testing-mcp";
it("your test", async () => {
render(<YourComponent />);
await connect({
context: { screen, fireEvent },
});
}, 600000); // 10 minute timeout for AI interaction
Step 4: Run with MCP enabled:
Prompt:
Please run the persistent test: `TESTING_MCP=true npm test test/example.test.tsx`,
Then use testing-mcp to write the test in `test/example.test.tsx` with these steps:
1. Click the "count" button.
2. Verify that the number on the count button becomes "1".
Now your AI assistant can see the page structure, execute code in the test, and help you write assertions.
Why Testing MCP
Traditional test writing is slow and frustrating:
- Write → Run → Read errors → Guess → Repeat - endless debugging cycles
- Add console.log statements manually - slow feedback loop
- AI assistants can't see your test state - you must describe everything
- Must manually explain available APIs - AI generates invalid code
Testing MCP solves this by giving AI assistants live access to your test environment:
- AI sees actual page structure (DOM), console logs, and rendered output
- AI executes code directly in tests without editing files
- AI knows exactly which testing APIs are available (screen, fireEvent, etc.)
- You iterate faster with real-time feedback instead of blind guessing
What Testing MCP Does
Real-Time Test Inspection
View live page structure snapshots, console logs, and test metadata through MCP tools. No more adding temporary console.log statements or running tests repeatedly.
Remote Code Execution
Execute JavaScript/TypeScript directly in your running test environment. Test interactions, check page state, or run assertions without modifying test files.
Smart Context Awareness
Automatically collects and exposes available testing APIs (like screen, fireEvent, waitFor) with type information and descriptions. AI assistants know exactly what's available and generate valid code on the first try.
await connect({
context: { screen, fireEvent, waitFor },
contextDescriptions: {
screen: "React Testing Library screen with query methods",
fireEvent: "Function to trigger DOM events",
},
});
Session Management
Reliable WebSocket connections with session tracking, reconnection support, and automatic cleanup. Multiple tests can connect simultaneously.
Zero CI Overhead
Automatically disabled in continuous integration (CI) environments. The connect() call becomes a no-op when TESTING_MCP is not set (including when it is called from setup-file hooks), so your tests run normally everywhere else.
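As a rough sketch of this gating, the guard below shows one way the opt-in could work. Treating any CI variable as "disable" is an assumption about how CI detection might be done; the library's actual logic may differ.

```typescript
// Hypothetical sketch of the TESTING_MCP gating, not the library's code.
function isBridgeEnabled(env: Record<string, string | undefined>): boolean {
  if (env.CI) return false; // never open the bridge in CI environments
  return env.TESTING_MCP === "true"; // explicit opt-in, per the docs
}

// Example: guard the helper call with isBridgeEnabled(process.env).
```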
AI-First Design
Built specifically for AI assistants and the Model Context Protocol. Provides structured metadata, clear tool descriptions, and predictable responses optimized for AI understanding.
Installation
Install dependencies and build the project before launching the MCP server or consuming the client helper.
npm install -D testing-mcp
# or
yarn add -D testing-mcp
# or
pnpm add -D testing-mcp
Node 18+ is required because the project uses ES modules and the WebSocket API.
Configure MCP Server
Add the MCP server to your AI assistant's configuration (e.g., Claude Desktop, VSCode, etc.):
{
"testing-mcp": {
"command": "npx",
"args": ["-y", "testing-mcp@latest"]
}
}
The server opens a WebSocket bridge on port 3001 (configurable) and registers MCP tools for state inspection, file editing, and remote code execution.
Connect From Tests
Import the client helper in your Jest or Vitest setup hooks to expose the page state to the MCP server.
Example Jest setup file (setupFilesAfterEnv):
// jest.setup.ts
import { screen, fireEvent } from "@testing-library/react";
import userEvent from "@testing-library/user-event";
import { connect } from "testing-mcp";
const timeout = 10 * 60 * 1000;
if (process.env.TESTING_MCP) {
jest.setTimeout(timeout);
}
afterEach(async () => {
if (!process.env.TESTING_MCP) return;
const state = expect.getState();
await connect({
port: 3001,
filePath: state.testPath,
context: {
userEvent,
screen,
fireEvent,
},
});
}, timeout);
It also supports usage in test files:
// example.test.tsx
import { render, screen, fireEvent, waitFor } from "@testing-library/react";
import userEvent from "@testing-library/user-event";
import { connect } from "testing-mcp";
it(
"logs the dashboard state",
async () => {
render(<Dashboard />);
await connect({
port: 3001,
filePath: import.meta.url,
context: {
screen,
fireEvent,
userEvent,
waitFor,
},
// Optional: provide descriptions to help LLMs understand the APIs
contextDescriptions: {
screen: "React Testing Library screen with query methods",
fireEvent: "Synchronous event triggering function",
userEvent: "User interaction simulation library",
waitFor: "Async utility for waiting on conditions",
},
});
},
1000 * 60 * 10
);
Set TESTING_MCP=true locally to enable the bridge. The helper no-ops when the variable is missing or the tests run in continuous integration.
If the DOM has already been automatically cleared by the time the afterEach hook executes, set RTL_SKIP_AUTO_CLEANUP=true.
MCP Tools
Once connected, your AI assistant can use these tools:
| Tool | Purpose | When to Use |
|---|---|---|
get_current_test_state | Fetch current page structure, console logs, and APIs | Inspect what's rendered and what APIs are available |
execute_test_step | Run JavaScript/TypeScript code in the test environment | Trigger interactions, check state, run assertions |
finalize_test | Remove connect() call and clean up test file | After test is complete and working |
list_active_tests | Show all connected tests with timestamps | See which tests are available |
get_generated_code | Extract code blocks inserted by the helper | Audit what code was added |
get_current_test_state
Returns the current test state including:
- Page structure snapshot: Current rendered HTML (DOM)
- Console logs: Captured console output
- Test metadata: Test file path, test name, session ID
- Available context: List of all APIs/variables available in execute_test_step, including their types, signatures, and descriptions
Response includes availableContext field:
{
"availableContext": [
{
"name": "screen",
"type": "object",
"description": "React Testing Library screen object"
},
{
"name": "fireEvent",
"type": "function",
"signature": "(element, event) => ...",
"description": "Function to trigger DOM events"
}
]
}
execute_test_step
Executes JavaScript/TypeScript code in the connected test client. The code can use any APIs listed in the availableContext field from get_current_test_state.
Best Practice: Always call get_current_test_state first to check which APIs are available before using execute_test_step.
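For illustration, a call to execute_test_step carries the code to run and the target session, matching the {code, sessionId} payload shown in the sequence diagram. The exact JSON-RPC envelope depends on your MCP client; this sketch shows only the tool arguments, and the sample code string assumes screen and fireEvent were exposed via connect():

```json
{
  "name": "execute_test_step",
  "arguments": {
    "sessionId": "<session id from list_active_tests>",
    "code": "fireEvent.click(screen.getByText('count'))"
  }
}
```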
Context and Available APIs
Inject testing utilities so AI knows what's available:
The connect() function accepts a context object that exposes APIs to the test execution environment. This allows AI assistants to know exactly what APIs are available when generating code.
Basic Usage
await connect({
context: {
screen, // React Testing Library queries
fireEvent, // DOM event triggering
userEvent, // User interaction simulation
waitFor, // Async waiting utility
},
});
Adding Descriptions (Recommended)
Provide descriptions for each context key to help AI understand what's available:
await connect({
context: {
screen,
fireEvent,
waitFor,
customHelper: async (text: string) => {
const button = screen.getByText(text);
fireEvent.click(button);
await waitFor(() => {});
},
},
contextDescriptions: {
screen: "Query methods like getByText, findByRole, etc.",
fireEvent: "Trigger DOM events: click, change, etc.",
waitFor: "Wait for assertions: waitFor(() => expect(...).toBe(...))",
customHelper: "async (text: string) => void - Clicks button by text",
},
});
How it works: The client collects metadata (name, type, function signature) for each context key. When AI calls get_current_test_state, it receives the full list of available APIs with their metadata, enabling accurate code generation.
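As an illustration of that metadata collection, here is a minimal sketch. It is not testing-mcp's actual implementation; the names ContextEntry and collectContextMetadata are invented for this example.

```typescript
// Hypothetical sketch: derive name, type, and a rough signature for each
// context entry, merging in any user-supplied descriptions.
type ContextEntry = {
  name: string;
  type: string;
  signature?: string;
  description?: string;
};

function collectContextMetadata(
  context: Record<string, unknown>,
  descriptions: Record<string, string> = {}
): ContextEntry[] {
  return Object.entries(context).map(([name, value]) => {
    const entry: ContextEntry = { name, type: typeof value };
    if (typeof value === "function") {
      // Crude signature: everything up to the closing paren of the params.
      const src = value.toString();
      entry.signature = `${src.slice(0, src.indexOf(")") + 1)} => ...`;
    }
    if (descriptions[name]) entry.description = descriptions[name];
    return entry;
  });
}
```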
Environment Variables
- TESTING_MCP: When set to true, enables the WebSocket bridge to the MCP server. Leave unset to disable (automatically disabled in CI environments).
- TESTING_MCP_PORT: Overrides the WebSocket port. Defaults to 3001. Set this if the default port is occupied or you want multiple servers running.
Custom port example:
{
"testing-mcp": {
"command": "npx",
"args": ["-y", "testing-mcp@latest"],
"env": {
"TESTING_MCP_PORT": "4001"
}
}
}
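The documented fallback behavior can be sketched as a small resolver. This is an illustration of the defaults described above (TESTING_MCP_PORT override, 3001 default), not the library's actual code.

```typescript
// Hypothetical port resolution matching the documented defaults.
function resolvePort(env: Record<string, string | undefined>): number {
  const parsed = Number.parseInt(env.TESTING_MCP_PORT ?? "", 10);
  // Fall back to 3001 on a missing or invalid value.
  return Number.isInteger(parsed) && parsed > 0 ? parsed : 3001;
}

// Example: resolvePort(process.env)
```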
FAQ
1. How do I view MCP errors?
If you see that testing-mcp fails to start in Cursor IDE, you can check detailed logs:
In Cursor IDE: Go to Output > MCP:user-testing-mcp to see detailed error information.
This will show you the exact error messages and help diagnose startup issues.
2. What if the port is already in use?
Each MCP client instance needs a unique port. If you want to run multiple testing-mcp instances simultaneously:
- Set different TESTING_MCP_PORT values for each instance in the MCP server config.
- Pass the same port number to the connect() function in your tests.
// In your test
await connect({
port: 4001, // Match your custom port
context: { screen, fireEvent },
});
For example, kill a process using the default port (macOS):
lsof -ti:3001 | xargs kill -9
3. Why shouldn't I use watch mode?
Testing MCP currently supports only one WebSocket connection per test at a time.
When your MCP client runs the same test command multiple times (like in watch mode), each run creates a new WebSocket connection. This can cause conflicts and unexpected behavior.
Recommendation: Run tests individually without watch mode when using TESTING_MCP=true.
4. My tests timeout immediately - what's wrong?
If tests with TESTING_MCP=true timeout quickly, you need to increase the test timeout.
AI assistants need time to inspect state and write tests - usually 5+ minutes minimum.
Set timeout in your test:
it("your test", async () => {
render(<YourComponent />);
await connect({ context: { screen, fireEvent } });
}, 600000); // 10 minutes = 600000ms
5. Can I put connect() in a test setup file instead of each test?
Yes, if your tests don't automatically clear the DOM between tests.
By placing connect() in an afterEach hook in your setup file, you can make testing completely non-invasive and easier for automated test writing.
Example Jest setup file (setupFilesAfterEnv):
// jest.setup.ts
import { screen, fireEvent } from "@testing-library/react";
import userEvent from "@testing-library/user-event";
import { connect } from "testing-mcp";
const timeout = 10 * 60 * 1000;
if (process.env.TESTING_MCP) {
jest.setTimeout(timeout);
}
afterEach(async () => {
if (!process.env.TESTING_MCP) return;
const state = expect.getState();
await connect({
port: 3001,
filePath: state.testPath,
context: {
userEvent,
screen,
fireEvent,
},
});
}, timeout);
Example Vitest setup file (setupFiles):
// vitest.setup.ts
import { beforeEach, afterEach, expect } from "vitest";
import { screen, fireEvent } from "@testing-library/react";
import userEvent from "@testing-library/user-event";
import { connect } from "testing-mcp";
const timeout = 10 * 60 * 1000;
beforeEach((context) => {
if (!process.env.TESTING_MCP) return;
Object.assign(context.task, {
timeout,
});
});
afterEach(async () => {
if (!process.env.TESTING_MCP) return;
const state = expect.getState();
await connect({
port: 3001,
filePath: state.testPath,
context: {
userEvent,
screen,
expect,
fireEvent,
},
});
}, timeout);
Important: This approach only works if your afterEach hooks don't automatically remove the DOM (e.g., you're not calling cleanup() before connect()).
How It Works
Testing MCP uses a three-process architecture:
- Test process calls connect() to send page snapshots, console logs, and metadata to the server
- MCP server manages WebSocket connections, stores session state, and exposes MCP tools via Stdio
- AI assistant calls MCP tools to inspect state and execute code remotely
Communication stays resilient to reconnections by tracking per-session UUIDs and cleaning up callbacks on close.
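The per-session bookkeeping can be sketched as a small registry. SessionRegistry and its field names are invented for this illustration and are not the library's actual API.

```typescript
import { randomUUID } from "node:crypto";

// Illustrative sketch of per-session tracking with UUIDs and cleanup.
type Session = { id: string; filePath: string; connectedAt: number };

class SessionRegistry {
  private sessions = new Map<string, Session>();

  register(filePath: string): Session {
    const session: Session = {
      id: randomUUID(), // per-session UUID, as described above
      filePath,
      connectedAt: Date.now(),
    };
    this.sessions.set(session.id, session);
    return session;
  }

  close(id: string): boolean {
    // Dropping the entry on close is what keeps reconnections clean.
    return this.sessions.delete(id);
  }

  list(): Session[] {
    return [...this.sessions.values()];
  }
}
```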
Process Interaction Sequence Diagram
The system consists of three independent processes that communicate through two different protocols:
┌──────────────────┐        ┌──────────────────┐        ┌──────────────────┐
│  Node.js Test    │        │   MCP Server     │        │    LLM/MCP       │
│    Process       │        │    Process       │        │    Client        │
└────────┬─────────┘        └────────┬─────────┘        └────────┬─────────┘
         │                           │                           │
         │                           │◀──────────────────────────┤
         │                           │  1. MCP Tool Call         │
         │                           │  (via Stdio/JSON-RPC)     │
         │                           │                           │
         │  2. await connect()       │                           │
         ├──────────────────────────▶│                           │
         │  Collects DOM & context   │                           │
         │                           │                           │
         │  3. WebSocket: "ready"    │                           │
         │  {dom, logs, context}     │                           │
         ├──────────────────────────▶│                           │
         │                           │  Stores session state     │
         │                           │                           │
         │  4. "connected"           │                           │
         │  {sessionId}              │                           │
         │◀──────────────────────────┤                           │
         │                           │                           │
         │  Test waits...            │  5. Returns state         │
         │                           ├──────────────────────────▶│
         │                           │  {dom, logs, context}     │
         │                           │                           │
         │                           │◀──────────────────────────┤
         │                           │  6. execute_test_step     │
         │                           │  {code, sessionId}        │
         │                           │                           │
         │  7. "execute"             │                           │
         │  {code, executionId}      │                           │
         │◀──────────────────────────┤                           │
         │                           │                           │
         │  Runs code with           │                           │
         │  available context        │                           │
         │  (screen, fireEvent...)   │                           │
         │                           │                           │
         │  8. "executed"            │                           │
         │  {result, newState}       │                           │
         ├──────────────────────────▶│                           │
         │                           │  9. Returns result        │
         │                           ├──────────────────────────▶│
         │  Test waits...            │  {result, newState}       │
         │                           │                           │
         │                           │◀──────────────────────────┤
         │                           │  10. finalize_test        │
         │                           │                           │
         │  11. "close"              │  Removes connect() call   │
         │◀──────────────────────────┤  from test file (AST)     │
         │                           │                           │
         │  Closes WebSocket         │                           │
         │  Test completes           │                           │
         │                           │  12. Returns success      │
         │                           ├──────────────────────────▶│
         ▼                           ▼                           ▼
Protocol Summary:
─────────────────
• Test Process ↔ MCP Server: WebSocket (port 3001)
  Message types: ready, connected, execute, executed, close
• MCP Server ↔ LLM Client: Stdio/JSON-RPC (MCP Protocol)
  Tools: get_current_test_state, execute_test_step, finalize_test,
         list_active_tests, get_generated_code
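The message types in the protocol summary above can be sketched as a discriminated union. Only the type names come from the docs; the exact field names beyond those shown in the diagram are assumptions.

```typescript
// Sketch of the WebSocket message shapes implied by the protocol summary.
type BridgeMessage =
  | { type: "ready"; dom: string; logs: string[] }
  | { type: "connected"; sessionId: string }
  | { type: "execute"; code: string; executionId: string }
  | { type: "executed"; executionId: string; result: unknown }
  | { type: "close" };

// Per the diagram, the server sends connected/execute/close to the test
// process, while ready/executed flow from the test to the server.
function isServerToTest(msg: BridgeMessage): boolean {
  return (
    msg.type === "connected" || msg.type === "execute" || msg.type === "close"
  );
}
```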
Key Interactions
- AI initiates: AI assistant calls MCP tools via Stdio to interact with tests
- Test connects: Test process calls await connect(), which establishes a WebSocket connection to the MCP server
- Bidirectional sync: Test sends state updates; server executes code remotely
- Session tracking: Each test gets a unique sessionId for managing multiple concurrent connections
- Automatic cleanup: Server uses Abstract Syntax Tree (AST) manipulation to remove connect() calls when finalizing
License
MIT