AXIS - Agent eXperience Index Score

AXIS is a synthetic testing framework for measuring how well systems support AI agent interaction. Think Lighthouse, but instead of scoring user experience, AXIS scores agent experience.

Given a scenario, an agent, and a prompt, AXIS runs the agent, captures a full transcript of the interaction, and produces a graded score across four independent dimensions: Goal Achievement, Environment, Service, and Agent.
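The exact report schema isn't shown here, but a graded result across the four dimensions can be pictured as follows. This is an illustrative sketch only: the field names and the unweighted-mean aggregation are assumptions, not the actual AXIS report format or scoring formula.

```typescript
// Hypothetical shape for a per-scenario result; the field names are
// illustrative, not the actual AXIS report schema.
interface DimensionScores {
  goalAchievement: number; // each dimension scored 0..1
  environment: number;
  service: number;
  agent: number;
}

// One plausible way to collapse four dimensions into a single score:
// an unweighted mean (an assumption, not AXIS's actual formula).
function overallScore(d: DimensionScores): number {
  return (d.goalAchievement + d.environment + d.service + d.agent) / 4;
}

const example: DimensionScores = {
  goalAchievement: 0.9,
  environment: 0.8,
  service: 0.7,
  agent: 0.6,
};

console.log(overallScore(example)); // ≈ 0.75
```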

Why AXIS

The web has Lighthouse. APIs have contract testing. Performance has k6. But there's no standardized way to answer: "How well does my system work when an AI agent tries to use it?"

As agents become a primary interface for interacting with websites, APIs, and developer platforms, the systems they interact with need to be measured and optimized for that experience, just like we optimize for page load time or accessibility.

Quick Start

npm install @netlify/axis

axis.config.json:

{
  "scenarios": "./scenarios",
  "agents": ["claude-code"]
}

scenarios/hello-world.json:

{
  "name": "Hello World",
  "prompt": "Navigate to https://example.com and describe what you see on the page.",
  "rubric": [
    { "check": "Agent visited the target URL", "weight": 0.5 },
    { "check": "Agent provided a description of the page content", "weight": 0.5 }
  ]
}
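Each rubric item pairs a check with a weight. One plausible way those weights combine, sketched below, is a weighted sum over pass/fail judgments; this is illustrative only and not AXIS's actual grading code.

```typescript
// Matches the rubric entries in scenarios/hello-world.json above.
interface RubricItem {
  check: string;
  weight: number;
}

// Illustrative only: score a scenario by summing the weights of the
// rubric checks judged as passed. Assumes weights sum to 1.
function scoreRubric(rubric: RubricItem[], passed: boolean[]): number {
  return rubric.reduce(
    (sum, item, i) => sum + (passed[i] ? item.weight : 0),
    0
  );
}

const rubric: RubricItem[] = [
  { check: "Agent visited the target URL", weight: 0.5 },
  { check: "Agent provided a description of the page content", weight: 0.5 },
];

console.log(scoreRubric(rubric, [true, false])); // 0.5
```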

Run it:

axis run

AXIS executes the scenario, scores the result, and writes a report to .axis/reports/.

Documentation

Full documentation lives at https://axisproject.ai.

Programmatic API

@netlify/axis exports its core functionality for use as a library:

import { run, scoreResults } from "@netlify/axis";

const output = await run({ configPath: "axis.config.json" });
const scored = await scoreResults(output);

console.log(`Average AXIS score: ${scored.summary.averageAxisScore}`);

The package also exports loadConfig, discoverScenarios, setBaseline, compareBaseline, createAgentAdapter, registerAdapter, and the underlying scoring primitives (buildSparseIndex, categorizeInteraction, normalizeTranscript). See src/index.ts for the full surface.

Roadmap

Delivered: scenario runner, four-dimension scoring pipeline, baselines with regression detection, MCP/skills wiring, custom adapter API, built-in adapters for Claude Code, Codex, and Gemini.
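The baseline workflow (setBaseline / compareBaseline) implies comparing current scores against a stored snapshot. A minimal sketch of what regression detection could look like follows; the function name, score-map shape, and tolerance are hypothetical, not the library's actual implementation.

```typescript
// Hypothetical: map of scenario name -> AXIS score (0..1).
type ScoreMap = Record<string, number>;

// Flag scenarios whose current score fell below the stored baseline
// by more than `tolerance`. Illustrative logic only.
function findRegressions(
  baseline: ScoreMap,
  current: ScoreMap,
  tolerance = 0.05
): string[] {
  return Object.keys(baseline).filter(
    (name) => (current[name] ?? 0) < baseline[name] - tolerance
  );
}

const baseline: ScoreMap = { "Hello World": 0.9, "Checkout Flow": 0.8 };
const current: ScoreMap = { "Hello World": 0.88, "Checkout Flow": 0.6 };

// "Hello World" dipped within tolerance; "Checkout Flow" regressed.
console.log(findRegressions(baseline, current)); // [ 'Checkout Flow' ]
```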

Planned:

  • Historical trending - score regression detection over time
  • AXIS Badge - embeddable score badge for READMEs
  • Configurable judge - separate adapter/model for scoring, independent of the agent under test
  • Score thresholds - CI gating with configurable pass/fail thresholds
  • Human interruption detection - penalize agent requests for human intervention
