AXIS is a synthetic testing framework for measuring how well systems support AI agent interaction. Think Lighthouse, but instead of scoring user experience, AXIS scores agent experience.
Given a scenario, an agent, and a prompt, AXIS runs the agent, captures a full transcript of the interaction, and produces a graded score across four independent dimensions: Goal Achievement, Environment, Service, and Agent.
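For a concrete sense of that output, the sketch below shows one plausible shape for a scored run. It is illustrative only, not AXIS's actual report schema:

```ts
// Hypothetical shape for illustration; the real report schema may differ.
interface AxisReport {
  scenario: string; // e.g. "Hello World"
  agent: string;    // e.g. "claude-code"
  scores: {
    // One score per dimension.
    goalAchievement: number;
    environment: number;
    service: number;
    agent: number;
  };
}
```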
The web has Lighthouse. APIs have contract testing. Performance has k6. But there's no standardized way to answer: "How well does my system work when an AI agent tries to use it?"
As agents become a primary interface for interacting with websites, APIs, and developer platforms, the systems they interact with need to be measured and optimized for that experience, just like we optimize for page load time or accessibility.
Install the package:

```sh
npm install @netlify/axis
```

Create `axis.config.json`:

```json
{
  "scenarios": "./scenarios",
  "agents": ["claude-code"]
}
```

Create `scenarios/hello-world.json`:

```json
{
  "name": "Hello World",
  "prompt": "Navigate to https://example.com and describe what you see on the page.",
  "rubric": [
    { "check": "Agent visited the target URL", "weight": 0.5 },
    { "check": "Agent provided a description of the page content", "weight": 0.5 }
  ]
}
```

Then run:

```sh
axis run
```

AXIS executes the scenario, scores the result, and writes a report to `.axis/reports/`.
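The rubric weights above suggest a weighted aggregate. Here is a minimal sketch of that arithmetic, assuming each check resolves to a simple pass/fail; the real scoring pipeline involves four dimensions and signals, so treat this as illustrative:

```ts
// Sketch only: assumes each rubric check resolves to a boolean pass/fail.
type RubricResult = { check: string; weight: number; passed: boolean };

function rubricScore(results: RubricResult[]): number {
  // Weighted sum of passed checks, normalized by total weight.
  const total = results.reduce((sum, r) => sum + r.weight, 0);
  const earned = results.reduce((sum, r) => sum + (r.passed ? r.weight : 0), 0);
  return total === 0 ? 0 : earned / total;
}

// With the hello-world rubric: visiting the URL (0.5) but giving no
// description (0) would yield 0.5 / 1.0 = 0.5.
```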
Full documentation lives at https://axisproject.ai:
- Overview - what AXIS measures and why
- Quick Start - install through your first scored run
- Configuration - `axis.config.json`, scenarios, MCP servers, skills
- CLI Reference - `axis run`, `axis reports`, `axis baseline`
- Running Tests - execution model, workspace isolation, custom adapters, CI integration
- Scoring Framework - the four dimensions, signals, calibration
`@netlify/axis` exports its core functionality for use as a library:

```ts
import { run, scoreResults } from "@netlify/axis";

const output = await run({ configPath: "axis.config.json" });
const scored = await scoreResults(output);
console.log(`Average AXIS score: ${scored.summary.averageAxisScore}`);
```

The package also exports `loadConfig`, `discoverScenarios`, `setBaseline`, `compareBaseline`, `createAgentAdapter`, `registerAdapter`, and the underlying scoring primitives (`buildSparseIndex`, `categorizeInteraction`, `normalizeTranscript`). See `src/index.ts` for the full surface.
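For example, `createAgentAdapter` and `registerAdapter` enable custom adapters. The sketch below assumes an adapter takes a name and an execute hook that returns a transcript; the actual contract lives in `src/index.ts`, so treat these signatures as assumptions:

```ts
// Illustrative sketch: the real createAgentAdapter/registerAdapter
// signatures may differ; see src/index.ts for the actual contract.
import { createAgentAdapter, registerAdapter } from "@netlify/axis";

const echoAdapter = createAgentAdapter({
  name: "echo-agent",
  // Hypothetical hook: receive the scenario prompt, return a transcript.
  async execute({ prompt }: { prompt: string }) {
    return { transcript: `echo: ${prompt}` };
  },
});

registerAdapter(echoAdapter);
```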
Delivered: scenario runner, four-dimension scoring pipeline, baselines with regression detection, MCP/skills wiring, custom adapter API, built-in adapters for Claude Code, Codex, and Gemini.
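As one example of the delivered baseline flow, a regression check with the exported helpers might look like the sketch below; `setBaseline` and `compareBaseline` are real exports, but their argument shapes and the `regressions` field are assumptions:

```ts
// Sketch under assumed signatures; the exact parameters are not
// documented in this README.
import { run, scoreResults, setBaseline, compareBaseline } from "@netlify/axis";

const scored = await scoreResults(await run({ configPath: "axis.config.json" }));

// First run: record current scores as the baseline.
await setBaseline(scored);

// Later runs: compare fresh scores against the stored baseline.
const diff = await compareBaseline(scored);
if (diff.regressions?.length) {
  // `regressions` is a hypothetical field for illustration.
  console.error("AXIS score regression detected", diff.regressions);
  process.exitCode = 1;
}
```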
Planned:
- Historical trending - score regression detection over time
- AXIS Badge - embeddable score badge for READMEs
- Configurable judge - separate adapter/model for scoring, independent of the agent under test
- Score thresholds - CI gating with configurable pass/fail thresholds
- Human interruption detection - penalize agent requests for human intervention