
AI Visibility

See how AI sees your brand.
Scores, platform breakdowns, competitive benchmarks — for any domain, from your terminal.

npm · MIT License · Node 14+ · Zero dependencies


npx ai-visibility stripe.com
  Stripe  Financial Technology

  AI Visibility Score  ███████████░░░░  52/100  C+
  AI Readiness Score   █████████████░░  58/100

  Platform Visibility

  ◉ ChatGPT        █████████░░░░░░  55  ↑
  ◈ Perplexity     ███████░░░░░░░░  48  →
  ◆ Google AI      ████████░░░░░░░  52  ↑
  ● Claude         █████████░░░░░░  58  →
  ◎ Copilot        ██████░░░░░░░░░  45  →

  Competitive Landscape

  ▸ Stripe                   ████████░░░░░░░ 52
    PayPal                   ██████░░░░░░░░░ 48
    Square                   █████░░░░░░░░░░ 37

What it does

When someone asks ChatGPT "what's the best payment API?" — does your brand show up?

AI Visibility analyzes how ChatGPT, Claude, Perplexity, Google AI, and Copilot see, cite, and recommend any brand. One command gives you:

  • AI Visibility Score — how often AI platforms mention and recommend you (0–100)
  • AI Readiness Score — how well your site is structured for AI consumption (0–100)
  • Per-platform breakdown — individual scores for each major AI platform
  • Competitive benchmarks — where you rank vs. direct competitors
  • Strengths & gaps — what's working, what's missing
  • Actionable recommendations — concrete steps to improve AI visibility

Built for developers, AI agents, marketing teams, and CI/CD pipelines.


Quick start

# No install needed — run directly
npx ai-visibility stripe.com

# Or install globally
npm install -g ai-visibility

That's it. No API key, no config, no sign-up.


Commands

analyze — Deep-dive a single brand

ai-visibility analyze nike.com
ai-visibility analyze shopify.com --json     # structured JSON output
ai-visibility analyze tesla.com -o report.json  # save to file

The analyze command is the default — you can skip it:

ai-visibility stripe.com         # same as: ai-visibility analyze stripe.com

compare — Benchmark competitors side-by-side

ai-visibility compare stripe.com square.com adyen.com paypal.com
  Brand                     Visibility     Readiness      Grade
  ─────────────────────────────────────────────────────────────────
  Stripe                    ████████░░░░░░ ████████████░░ C+
  PayPal                    ██████░░░░░░░░ ██████████░░░░ C
  Square                    █████░░░░░░░░░ ████████░░░░░░ C-
  Adyen                     ████░░░░░░░░░░ ██████░░░░░░░░ D+

  Platform breakdown:

  Platform        Stripe         PayPal         Square         Adyen
  ─────────────────────────────────────────────────────────────────────
  ChatGPT         55             48             35             28
  Perplexity      48             42             32             25
  Google AI       52             50             38             30
  Claude          58             45             40             22
  Copilot         45             40             30             20

For AI Agents

This CLI is designed to be called by AI coding agents (Claude Code, Cursor, Codex, Windsurf). The --json flag returns structured, parseable output with zero noise.

npx ai-visibility analyze mysite.com --json

Using with Claude Code

In a Claude Code conversation, just ask:

"Check the AI visibility for stripe.com"

Claude Code will run:

npx ai-visibility analyze stripe.com --json

And get back structured data it can reason over — scores, platform breakdowns, gaps, recommendations.

To pre-approve the tool, add to your .claude/settings.json:

{
  "permissions": {
    "allow": [
      "Bash(npx ai-visibility*)"
    ]
  }
}

Using with Cursor / Codex / Windsurf

Any AI coding agent that can run shell commands can use this CLI. The --json flag ensures clean, parseable output:

# Get visibility score as a number
npx ai-visibility analyze stripe.com --json | jq '.aiVisibilityScore'

# Get platform scores as CSV
npx ai-visibility analyze stripe.com --json | \
  jq -r '.platformVisibility[] | [.platform, .score, .trend] | @csv'

# Extract top recommendation
npx ai-visibility analyze stripe.com --json | \
  jq '.recommendations[0]'
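
Building on the one-liners above, a slightly larger sketch that renders the per-platform scores as a Markdown table. `sample.json` here is a hand-written stand-in for real `--json` output, trimmed to the fields the filter uses:

```shell
# Stand-in for `npx ai-visibility analyze <domain> --json` (trimmed sample)
cat > sample.json <<'EOF'
{"platformVisibility":[
  {"platform":"ChatGPT","score":55,"trend":"Growing"},
  {"platform":"Claude","score":58,"trend":"Stable"}
]}
EOF

# Build a Markdown table: header rows plus one row per platform
jq -r '
  ["| Platform | Score | Trend |", "|---|---|---|"]
  + [.platformVisibility[] | "| \(.platform) | \(.score) | \(.trend) |"]
  | .[]
' sample.json
```

The same filter works unchanged on a full report, since it only touches `platformVisibility`.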

MCP tool definition

If you're building an MCP server that wraps this CLI:

{
  "name": "ai_visibility_analyze",
  "description": "Analyze how AI platforms (ChatGPT, Claude, Perplexity, Google AI, Copilot) see and recommend a brand. Returns visibility scores, platform breakdowns, competitive benchmarks, and recommendations.",
  "input_schema": {
    "type": "object",
    "properties": {
      "domain": {
        "type": "string",
        "description": "Domain to analyze (e.g. stripe.com, nike.com)"
      }
    },
    "required": ["domain"]
  }
}

JSON Schema

The --json output returns a complete analysis object:

{
  "brand": "Stripe",
  "domain": "stripe.com",
  "industry": "Financial Technology",
  "tagline": "Financial infrastructure for the internet",
  "brandColor": "#635BFF",

  // Scores
  "aiVisibilityScore": 52,           // 0-100 — AI platform presence
  "aiReadinessScore": 58,            // 0-100 — technical AI readiness
  "overallGrade": "C+",              // A+ to F

  // Per-platform scores
  "platformVisibility": [
    { "platform": "ChatGPT", "score": 55, "trend": "Growing", "details": "..." },
    { "platform": "Perplexity", "score": 48, "trend": "Stable", "details": "..." },
    { "platform": "Google AI", "score": 52, "trend": "Growing", "details": "..." },
    { "platform": "Claude", "score": 58, "trend": "Stable", "details": "..." },
    { "platform": "Copilot", "score": 45, "trend": "Stable", "details": "..." }
  ],

  // Competitive landscape
  "competitors": [
    { "name": "Stripe", "score": 52, "isTarget": true },
    { "name": "PayPal", "score": 48, "isTarget": false },
    { "name": "Square", "score": 37, "isTarget": false }
  ],

  // Analysis
  "topics": ["payments", "api", "fintech", "developer-tools"],
  "strengths": ["Strong API documentation", "Active developer community"],
  "gaps": ["No MCP server", "Limited structured data markup"],
  "summary": "...",
  "detailedAnalysis": "...",

  // Recommendations
  "recommendations": [
    { "title": "Deploy MCP Server", "description": "...", "impact": "high", "effort": "medium" }
  ],
  "frictionIssues": [
    { "issue": "No structured API for product discovery", "severity": "high", "effort": "low", "fix": "..." }
  ],
  "missingIntegrations": [
    { "name": "MCP Server", "description": "...", "priority": "high" }
  ],

  // Prompts & discovery
  "samplePrompts": [
    { "prompt": "What's the best payment API?", "mentioned": true, "snippet": "..." }
  ],
  "discoveryPrompts": ["best payment processing API", "stripe vs paypal", "..."],
  "whyAiMatters": ["40% of developer tool discovery now starts with AI assistants", "..."]
}
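
As a quick sketch of consuming a saved report, the snippet below pulls the headline grade and the gap list with jq. The `report.json` here is a hand-written sample trimmed to the fields used, standing in for a report saved with `-o report.json`:

```shell
# Hypothetical saved report, trimmed to the fields used below
cat > report.json <<'EOF'
{"overallGrade":"C+","aiVisibilityScore":52,
 "gaps":["No MCP server","Limited structured data markup"]}
EOF

# Headline grade and score, then each gap as a bullet
jq -r '"\(.overallGrade) (\(.aiVisibilityScore)/100)"' report.json
jq -r '.gaps[] | "- " + .' report.json
```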

CI/CD Integration

Track your AI visibility score over time:

# .github/workflows/ai-visibility.yml
name: AI Visibility Check
on:
  schedule:
    - cron: '0 9 * * 1'  # Every Monday 9am
  workflow_dispatch:

jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - name: Analyze AI Visibility
        run: |
          RESULT=$(npx ai-visibility analyze ${{ vars.DOMAIN }} --json)
          SCORE=$(echo "$RESULT" | jq '.aiVisibilityScore')
          GRADE=$(echo "$RESULT" | jq -r '.overallGrade')
          echo "### AI Visibility: $GRADE ($SCORE/100)" >> "$GITHUB_STEP_SUMMARY"
          echo "$RESULT" | jq -r '.platformVisibility[] | "- \(.platform): \(.score)/100 \(.trend)"' >> "$GITHUB_STEP_SUMMARY"

      - name: Alert on low score
        run: |
          SCORE=$(npx ai-visibility analyze ${{ vars.DOMAIN }} --json | jq '.aiVisibilityScore')
          if [ "$SCORE" -lt 20 ]; then
            echo "::warning::AI Visibility score dropped below 20"
          fi
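
A fixed threshold is one option; another is gating on regression against a committed baseline. A minimal sketch of that check is below. `baseline.txt` and the inline `report.json` are stand-ins (in a real workflow the report would come from `npx ai-visibility analyze "$DOMAIN" --json -o report.json`, and the baseline from your last accepted run):

```shell
# Stand-in inputs so the gating logic is visible on its own
echo 50 > baseline.txt
echo '{"aiVisibilityScore": 52}' > report.json

BASELINE=$(cat baseline.txt)
SCORE=$(jq '.aiVisibilityScore' report.json)

# Fail the job if the score dropped below the baseline
if [ "$SCORE" -lt "$BASELINE" ]; then
  echo "::error::AI Visibility regressed: $SCORE < $BASELINE"
  exit 1
fi
echo "OK: $SCORE >= $BASELINE"
```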

Options

Flag                  Description
─────────────────────────────────────────────────────────────
--json                Structured JSON output (for agents & pipelines)
--quiet, -q           Suppress progress spinners
--output <file>, -o   Save JSON report to file
--help, -h            Show help
--version, -v         Show version

Environment Variable  Description    Default
─────────────────────────────────────────────────────────────
CODIRIS_API           API endpoint   https://codiris.app

How it works

  ┌─────────────┐     ┌──────────────────┐     ┌────────────────────┐
  │  Your CLI   │────▶│  Codiris API     │────▶│  AI Analysis       │
  │  terminal   │     │  /api/analyze    │     │  (multi-platform)  │
  └─────────────┘     └──────────────────┘     └────────────────────┘
                              │
                              ▼
                    ┌──────────────────┐
                    │  Site crawl +    │
                    │  brand detection │
                    │  + AI probing    │
                    └──────────────────┘
  1. Crawls the domain — extracts brand signals, structured data, technical capabilities
  2. Probes AI platforms — evaluates how ChatGPT, Claude, Perplexity, Google AI, and Copilot reference the brand
  3. Benchmarks competitors — identifies direct competitors and compares visibility scores
  4. Generates recommendations — actionable steps to improve AI discoverability

Interactive Report

Want the full visual experience with charts, PDF export, and shareable links?

https://codiris.app/ai-visibility?url=stripe.com

License

MIT — Made by Codiris
