# AI Code Metrics

A comprehensive framework for measuring AI coding assistant effectiveness, with a focus on Claude Code and other terminal-based AI coding assistants.

## Features
- Collect and analyze productivity metrics for AI-assisted coding
- Track API costs and calculate ROI (see the cost sketch after this list)
- Monitor code quality and security metrics
- Visualize metrics with Grafana dashboards
- Secure and private data handling
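
To give a flavor of the cost-tracking idea, here is a minimal sketch of estimating the API cost of a single prompt/completion pair with tiktoken. The per-token prices and the `estimate_cost` helper are hypothetical placeholders, not real vendor pricing or part of this package, and tiktoken's encodings are OpenAI's, so counts for Claude are approximate.

```python
import tiktoken

# Hypothetical prices in USD per million tokens -- placeholders, not real vendor pricing.
PRICE_PER_MTOK_INPUT = 3.00
PRICE_PER_MTOK_OUTPUT = 15.00

def estimate_cost(prompt: str, completion: str) -> float:
    """Rough cost estimate: count tokens with tiktoken, multiply by assumed prices."""
    enc = tiktoken.get_encoding("cl100k_base")
    input_tokens = len(enc.encode(prompt))
    output_tokens = len(enc.encode(completion))
    return (input_tokens * PRICE_PER_MTOK_INPUT
            + output_tokens * PRICE_PER_MTOK_OUTPUT) / 1_000_000

print(f"${estimate_cost('Refactor this function...', 'def refactored(): ...'):.6f}")
```
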
## Prerequisites

- Python 3.12+
- Docker and Docker Compose (for metrics infrastructure)
- Git repository to analyze
## Installation

```bash
# Clone the repository
git clone https://github.com/yourusername/ai-code-metrics.git
cd ai-code-metrics

# Set up the environment and dependencies with uv
uv init
uv add prometheus-client gitpython pylint coverage pytest flask tiktoken cryptography

# Start the metrics infrastructure (Prometheus + Grafana)
docker-compose up -d
```
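
If you want to confirm the stack came up before wiring in the collector, a quick check against Prometheus's standard `/-/ready` endpoint works. This snippet is just a convenience and assumes the default port mapping:

```python
import urllib.request

# Prometheus serves /-/ready once it is able to handle traffic.
try:
    with urllib.request.urlopen("http://localhost:9090/-/ready", timeout=5) as resp:
        print("Prometheus ready:", resp.status == 200)
except OSError as exc:
    print("Prometheus not reachable yet:", exc)
```
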
## Quick Start

```python
from ai_code_metrics.collectors import MetricsCollector

# Initialize the metrics collector
collector = MetricsCollector()

# Track AI-assisted functions
@collector.track_function(ai_assisted=True)
def my_ai_assisted_function():
    # Your AI-assisted code here
    pass

# Use it in your workflows; each call is recorded
my_ai_assisted_function()
```
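
For intuition, here is a minimal sketch of how a decorator like `track_function` could be built on prometheus_client, recording call counts and latencies with an `ai_assisted` label. The metric names and labels are illustrative assumptions, not the package's actual internals:

```python
import functools
import time

from prometheus_client import Counter, Histogram

# Illustrative metrics; the real collector may use different names and labels.
CALLS = Counter("ai_function_calls_total", "Function calls", ["function", "ai_assisted"])
LATENCY = Histogram("ai_function_seconds", "Function latency", ["function", "ai_assisted"])

class MetricsCollector:
    def track_function(self, ai_assisted: bool = False):
        def decorator(func):
            labels = {"function": func.__name__, "ai_assisted": str(ai_assisted)}

            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                CALLS.labels(**labels).inc()
                start = time.perf_counter()
                try:
                    return func(*args, **kwargs)
                finally:
                    LATENCY.labels(**labels).observe(time.perf_counter() - start)

            return wrapper
        return decorator
```

Counting calls and observing latency in a `finally` block keeps the metrics accurate even when the wrapped function raises.
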
## Metrics Infrastructure

The metrics infrastructure uses:

- Prometheus: time-series database for metrics storage
- Grafana: visualization and dashboards

Access the dashboards at:

- Grafana: http://localhost:3000 (default credentials: admin/admin123)
- Prometheus: http://localhost:9090
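
For Prometheus to have anything to show, the application side must expose the collected metrics over HTTP. prometheus_client's built-in `start_http_server` does this; the port below is an example and must match whatever scrape target your Prometheus configuration actually defines:

```python
from prometheus_client import start_http_server

# Serve collected metrics at http://localhost:8000/metrics for Prometheus to scrape.
# The port is an assumption; align it with your Prometheus scrape config.
start_http_server(8000)
```
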
## Documentation

See the docs directory for detailed documentation.
## License

MIT
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.