Halleck45/ast-metrics

AST Metrics

No server. No account. One binary.
AST Metrics analyzes your codebase (complexity, architecture, coupling, bus factor...) and runs anywhere.
Drop it in any CI. Works offline. Nothing to install, no SaaS, no data leaves your machine.



Documentation | Contributing | Twitter


Analyze your project

Paste a GitHub URL. Get a full report. No install.

Or explore live examples:

spf13/cobra fatih/color gorilla/mux guzzle/psr7 thephpleague/flysystem



Getting Started

Open your terminal and run the following command:

curl -s https://raw.githubusercontent.com/Halleck45/ast-metrics/main/scripts/download.sh | bash
./ast-metrics analyze --report-html=<directory> /path/to/your/code

To install it manually, follow the detailed installation instructions.

What you get

  • Architectural analysis: community detection, coupling, instability. Catch design drift early.
  • Code metrics: cyclomatic complexity, maintainability index, lines of code.
  • Activity metrics: commit history, bus factor. Know who owns what.
  • Linter: enforce thresholds on coupling, complexity, LOC per method.
  • CI/CD ready: GitHub Actions, GitLab CI, any pipeline. Exits non-zero on violations.
  • Multiple report formats: HTML dashboard, JSON, Markdown, SARIF, OpenMetrics.
  • MCP server: give AI coding agents architectural awareness via the Model Context Protocol.
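Because the linter can emit SARIF, its results are easy to post-process with other tooling. Here is a minimal sketch of counting violations from a SARIF document in Python; the rule ids in the sample are hypothetical, only the `runs`/`results`/`level` structure follows the SARIF 2.1.0 specification:

```python
# Illustrative SARIF 2.1.0 fragment, shaped like what a linter report
# could contain. In practice you would load it with json.load() from
# the report file; the rule ids below are hypothetical examples.
sarif = {
    "version": "2.1.0",
    "runs": [
        {
            "tool": {"driver": {"name": "ast-metrics"}},
            "results": [
                {"ruleId": "architecture/coupling", "level": "error",
                 "message": {"text": "Controller must not depend on Repository"}},
                {"ruleId": "volume/max_loc_by_method", "level": "warning",
                 "message": {"text": "Method exceeds 30 lines"}},
            ],
        }
    ],
}

def count_violations(report: dict, level: str = "error") -> int:
    """Count SARIF results at the given severity level across all runs."""
    return sum(
        1
        for run in report.get("runs", [])
        for result in run.get("results", [])
        if result.get("level") == level
    )

print(count_violations(sarif, "error"))  # 1 error-level result in this sample
```

The same loop works for gating a custom pipeline step: fail the build when the error count is non-zero.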

Read more in the documentation

Linting your code

Run:

# create a .ast-metrics.yaml config file
ast-metrics init 

# Add ruleset to your config file
ast-metrics ruleset add architecture
ast-metrics ruleset add volume
ast-metrics ruleset list # see the list of available rulesets

# Run the linter
ast-metrics lint

You can declare thresholds in your YAML config (lines of code per method, coupling, maintainability...).

Example:

requirements:
  rules:
    architecture:
      coupling:
        forbidden:
          - from: Controller
            to: Repository
          - from: Repository
            to: Service
      max_afferent_coupling: 10
      max_efferent_coupling: 10
      min_maintainability: 70
    volume:
      max_loc: 1000
      max_logical_loc: 600
      max_loc_by_method: 30
      max_logical_loc_by_method: 20
    complexity:
      max_cyclomatic: 10
    golang:
      no_package_name_in_method: true
      max_nesting: 4
      max_file_size: 1000
      max_files_per_package: 50
      slice_prealloc: true
      ignored_error: true
      context_missing: true
      context_ignored: true

This makes it easy to enforce architecture and quality at scale.

Run ast-metrics ruleset list to see the list of available rulesets. Then ast-metrics ruleset add <ruleset-name> to apply a ruleset to your project.

CI usage

Use the dedicated CI command to run lint and generate all reports in one go:

ast-metrics ci [options] /path/to/your/code

Notes:

  • This command runs the linter first, then generates HTML, Markdown, JSON, OpenMetrics and SARIF reports.
  • If any lint violations are found, the command exits with a non-zero status but still produces the reports.
  • The previous alias analyze --ci is deprecated and will display a warning. Please migrate to ast-metrics ci.
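The same command drops into other pipelines. Below is a minimal `.gitlab-ci.yml` sketch, assuming the install script from Getting Started and an `ast-metrics-report/` output directory; the artifact path is an assumption, so adjust it to wherever your config writes reports:

```yaml
ast-metrics:
  stage: test
  image: alpine:latest
  before_script:
    - apk add --no-cache curl bash
    - curl -s https://raw.githubusercontent.com/Halleck45/ast-metrics/main/scripts/download.sh | bash
  script:
    # `ci` exits non-zero on lint violations, which fails the job
    - ./ast-metrics ci .
  artifacts:
    when: always          # keep reports even when the lint step fails
    paths:
      - ast-metrics-report/   # assumed report directory; adjust to your config
```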

GitHub Actions

Create a .github/workflows/ast-metrics.yml file in your project with the following content:

name: "AST Metrics"
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
        - uses: halleck45/action-ast-metrics@v1
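To keep the generated reports, the action can be paired with an artifact upload. A sketch of the extended workflow; the `ast-metrics-report/` path is an assumption, so check where the action writes its reports in your setup:

```yaml
name: "AST Metrics"
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: halleck45/action-ast-metrics@v1
      # Publish the generated reports as a downloadable artifact,
      # even when the lint step fails.
      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: ast-metrics-report
          path: ast-metrics-report/   # assumed output path; adjust as needed
```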

MCP Server for AI agents

AI coding agents (Claude Code, Cursor, Copilot...) read code linearly and lack architectural awareness. AST Metrics can act as an MCP server to give them on-demand access to complexity, coupling, dependency graphs, community detection, risk scoring, and test quality — without reading every file.

ast-metrics mcp .

This starts a stdio MCP server exposing 8 tools:

  • analyze_project: high-level overview of languages, complexity, maintainability, and top risks
  • get_file_metrics: detailed metrics for a specific file
  • find_risky_code: files and classes with the highest risk scores
  • find_complex_code: functions and classes above a complexity threshold
  • get_dependencies: dependency subgraph around a component
  • get_coupling: afferent/efferent coupling for a component
  • get_communities: architectural community detection and metrics
  • get_test_quality: test isolation, traceability, god tests, orphan classes

Once configured, just talk to your AI agent naturally. For example:

"What are the riskiest files to refactor?" · "Show me the dependencies of the UserService class — what would break if I change it?" · "Are there complex classes with no tests?" · "I need to work on src/billing/invoice.go, what should I know?"

To use it with Claude Code or any MCP-compatible agent, add a .mcp.json at your project root:

{
  "mcpServers": {
    "ast-metrics": {
      "command": "ast-metrics",
      "args": ["mcp", "."]
    }
  }
}

Supported languages

  • Golang: any version
  • Python: Python 2 and Python 3
  • Rust: any version
  • PHP: up to PHP 8.5
  • 🕛 TypeScript (planned)
  • 🕛 Flutter (planned)
  • 🕛 Java (planned)
  • 🕛 C++ (planned)
  • 🕛 Ruby (planned)

License

AST Metrics is open-source software licensed under the MIT license.

Contributing

AST Metrics is an actively evolving project.

We welcome discussions, bug reports, and pull requests.

➡️ Start contributing here

Support the project

If AST Metrics saved you time, a star goes a long way — it helps other developers discover the tool.

