# Argus

Python 3.11+ · LangChain · Ollama · License: MIT

AI-powered code review agent that combines local LLM inference (Ollama) with a knowledge base (ChromaDB) to deliver context-aware reviews. Feed it your team's style guides, best-practice talks, and reference repos -- Argus uses that knowledge to produce reviews tailored to your codebase.

## Features

- **Category-based review** -- five specialized review passes: Security, Performance, Bug, Code Quality, and DBA
- **Three review interfaces** -- GitHub PR comments, cron-based local scanning, and an MCP server for editor integration
- **Multi-source knowledge ingestion** -- web pages, YouTube transcripts, PDF/EPUB books, GitHub repos, and text/markdown files
- **Structured output** -- markdown reports with per-category scores, inline GitHub PR comments with category labels, and JSON
- **Fully local** -- runs entirely on your infrastructure with Ollama; no data leaves your network
- **Docker-ready** -- a single `docker compose up` brings up Argus, Ollama, and ChromaDB

## Quick Start

```bash
# 1. Clone and install
git clone https://github.com/sixhustle/argus.git
cd argus
pip install .

# 2. Start Ollama and pull a model
ollama serve          # in a separate terminal
ollama pull codellama

# 3. Start ChromaDB
docker run -d -p 8000:8000 chromadb/chroma:latest

# 4. Review a file
argus review file ./src/main.py
```

## Installation

### Prerequisites

| Dependency | Version | Purpose |
|------------|---------|---------|
| Python | 3.11+ | Runtime |
| Ollama | latest | Local LLM inference |
| ChromaDB | 0.5+ | Vector store for knowledge base |

### From source

```bash
git clone https://github.com/sixhustle/argus.git
cd argus
pip install .

# Development install with linting and testing tools
pip install ".[dev]"
```

## Configuration

Copy the example environment file and edit as needed:

```bash
cp .env.example .env
```

| Variable | Default | Description |
|----------|---------|-------------|
| `ARGUS_OLLAMA_BASE_URL` | `http://localhost:11434` | Ollama server URL |
| `ARGUS_OLLAMA_MODEL` | `codellama` | Model name for review and embeddings |
| `ARGUS_CHROMA_HOST` | `localhost` | ChromaDB host |
| `ARGUS_CHROMA_PORT` | `8000` | ChromaDB port |
| `ARGUS_CHROMA_COLLECTION` | `argus` | ChromaDB collection name |
| `ARGUS_GITHUB_TOKEN` | (unset) | GitHub personal access token (for PR reviews) |
| `ARGUS_GITHUB_WEBHOOK_SECRET` | (unset) | Webhook secret for GitHub integration |
| `ARGUS_REVIEW_TARGET_PATHS` | (unset) | Comma-separated project paths for cron scanning |
| `ARGUS_REVIEW_FILE_EXTENSIONS` | `.py,.ts,.js,.go,.java,.rs` | File extensions to review |

All variables use the `ARGUS_` prefix and can be set in `.env` or as environment variables.
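
The prefix convention above can be sketched with the standard library alone. The real `config.py` uses Pydantic Settings, so this is an illustrative approximation, not Argus's actual loader; the field names simply mirror the table:

```python
import os
from dataclasses import dataclass, fields

@dataclass
class Settings:
    """Illustrative sketch: resolve ARGUS_-prefixed env vars onto defaults."""
    ollama_base_url: str = "http://localhost:11434"
    ollama_model: str = "codellama"
    chroma_host: str = "localhost"
    chroma_port: int = 8000

    @classmethod
    def from_env(cls, prefix: str = "ARGUS_") -> "Settings":
        kwargs = {}
        for f in fields(cls):
            # ollama_model -> ARGUS_OLLAMA_MODEL, etc.
            raw = os.environ.get(prefix + f.name.upper())
            if raw is not None:
                # Coerce integer fields (ports); everything else stays a string
                kwargs[f.name] = int(raw) if f.type in (int, "int") else raw
        return cls(**kwargs)
```

Pydantic Settings performs the same prefixed lookup, with validation and `.env` file parsing built in.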

## Review Categories

Every review runs five specialized analysis passes by default. Each pass uses a dedicated prompt tuned for its domain:

| Category | Icon | Focus |
|----------|------|-------|
| Security | 🛡️ | SQL injection, XSS, auth flaws, hardcoded secrets, SSRF |
| Performance | ⚡ | N+1 queries, memory leaks, O(n²), connection pool, caching |
| Bug | 🐛 | NPE, race conditions, off-by-one, error handling, concurrency |
| Code Quality | 📏 | SOLID, naming, complexity, dead code, design patterns |
| DBA | 🗄️ | Query optimization, indexes, deadlocks, schema design, Elasticsearch, Redis |

Use `--category` to run only specific passes:

```bash
# All 5 categories (default)
argus review file src/UserService.java

# Security and DBA only
argus review file src/UserService.java --category security,dba

# Performance only
git diff | argus review diff --category performance
```

## Usage

### 1. CLI Review

Review a single file:

```bash
argus review file path/to/file.py
argus review file path/to/file.py --output json
argus review file path/to/file.py --category security,bug
```

Review a git diff (pipe from stdin):

```bash
git diff | argus review diff
git diff HEAD~3 | argus review diff --output json
git diff | argus review diff --category security,performance
```

Review an entire project:

```bash
argus review project ./my-project
argus review project ./my-project --output json
argus review project ./my-project --category dba
```
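
Before any LLM call, a project review has to decide which files are eligible. A rough, hypothetical sketch of that discovery step, honoring an extension filter like `ARGUS_REVIEW_FILE_EXTENSIONS` (the real `ProjectScanner`'s skip rules may differ, and the skip list here is assumed):

```python
from pathlib import Path

def collect_files(root: str, extensions: set[str]) -> list[Path]:
    """Walk a project tree and keep only reviewable source files."""
    skip_dirs = {".git", ".venv", "node_modules", "__pycache__"}  # assumed skip list
    results = []
    for path in sorted(Path(root).rglob("*")):
        # Skip anything inside a vendored or generated directory
        if any(part in skip_dirs for part in path.parts):
            continue
        if path.is_file() and path.suffix in extensions:
            results.append(path)
    return results
```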

### 2. GitHub PR Review

Argus posts inline review comments directly on pull requests using the GitHub API.

**Via GitHub Actions (recommended):**

Add the workflow file at `.github/workflows/review.yml` (included in this repo). It triggers on `pull_request` events (`opened` and `synchronize`), runs the review, and posts results as a PR comment.

Required repository secrets:

- `GITHUB_TOKEN` -- automatically provided by GitHub Actions

The workflow:

  1. Checks out the repository
  2. Spins up an Ollama service container
  3. Pulls the configured model
  4. Generates a diff between the base branch and HEAD
  5. Pipes the diff through argus review diff
  6. Posts the markdown report as a PR comment (updates existing comment on re-runs)

**Via webhook:**

Start the API server and configure a GitHub webhook pointing to your server:

```bash
argus serve api --port 8080
```

Set the webhook URL to `https://your-server:8080/webhook` with content type `application/json` and the `pull_request` event selected.
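
GitHub signs each webhook delivery with an HMAC-SHA256 digest of the raw body, sent in the `X-Hub-Signature-256` header. A handler backed by `ARGUS_GITHUB_WEBHOOK_SECRET` would verify it roughly like this (a sketch of the standard scheme, not Argus's actual handler):

```python
import hashlib
import hmac

def verify_signature(payload: bytes, secret: str, signature_header: str) -> bool:
    """Compare GitHub's X-Hub-Signature-256 header against a locally computed HMAC."""
    expected = "sha256=" + hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(expected, signature_header)
```

Requests that fail this check should be rejected before any review work is scheduled.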

### 3. Cron-Based Local Scanning

The `ProjectScanner` reviews local project directories on a schedule and writes timestamped markdown reports.

```bash
# Set target paths in .env
ARGUS_REVIEW_TARGET_PATHS=/path/to/project-a,/path/to/project-b

# Run manually or add to crontab
argus review project /path/to/project
```

Example crontab entry for daily scans:

```bash
0 2 * * * cd /opt/argus && argus review project /path/to/project --output json > /var/log/argus/$(date +\%F).json
```

### 4. MCP Server

Argus exposes five tools via the Model Context Protocol for integration with editors and AI assistants:

| Tool | Description | Optional `categories` param |
|------|-------------|-----------------------------|
| `review_file` | Review a single source file | `["security", "dba"]` etc. |
| `review_diff` | Review a unified git diff | `["performance", "bug"]` etc. |
| `review_project` | Review all eligible files in a directory | `["security"]` etc. |
| `ingest` | Add a knowledge source (web, youtube, book, github, text) | -- |
| `search_knowledge` | Query the knowledge base | -- |

All review tools accept an optional `categories` array. When omitted, all five categories run.

Start the MCP server:

```bash
argus serve mcp
```

#### Claude Desktop

Add to `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "argus": {
      "command": "argus",
      "args": ["serve", "mcp"],
      "env": {
        "ARGUS_OLLAMA_BASE_URL": "http://localhost:11434",
        "ARGUS_OLLAMA_MODEL": "codellama"
      }
    }
  }
}
```

#### Claude Code

Add to `~/.claude/settings.json`:

```json
{
  "mcpServers": {
    "argus": {
      "command": "/path/to/argus/.venv/bin/argus",
      "args": ["serve", "mcp"]
    }
  }
}
```

#### MCP tool call examples

```jsonc
// Review a file (all categories)
{"name": "review_file", "arguments": {
  "file_path": "src/UserService.java",
  "content": "public class UserService { ... }"
}}

// Review a file (security + DBA only)
{"name": "review_file", "arguments": {
  "file_path": "src/UserService.java",
  "content": "public class UserService { ... }",
  "categories": ["security", "dba"]
}}

// Review a diff (performance only)
{"name": "review_diff", "arguments": {
  "diff": "--- a/src/User.java\n+++ b/src/User.java\n@@ ...",
  "categories": ["performance"]
}}

// Ingest a knowledge source
{"name": "ingest", "arguments": {
  "source": "https://docs.spring.io/spring-boot/reference/web/servlet.html",
  "source_type": "web"
}}

// Search the knowledge base
{"name": "search_knowledge", "arguments": {
  "query": "Spring Boot transaction management best practices"
}}
```

The server communicates over stdio and can be used with any MCP-compatible client.

## Knowledge Base

Argus reviews are more useful when backed by domain knowledge. Ingest your team's coding standards, reference material, and best-practice resources to get reviews that understand your project's conventions.

### Ingest individual sources

```bash
# Web page
argus ingest web "https://docs.python.org/3/library/ast.html"

# YouTube transcript
argus ingest youtube "https://www.youtube.com/watch?v=example"

# PDF or EPUB book
argus ingest book ./books/clean-code.pdf

# GitHub repository
argus ingest github "https://github.com/owner/repo"

# Text or markdown file
argus ingest text ./docs/style-guide.md
```

### Manage sources with `sources.yml`

Define all knowledge sources in `sources.yml` at the project root. Run `argus ingest sync` to ingest only new or unprocessed entries.

```yaml
sources:
  - source: "https://docs.python.org/3/library/ast.html"
    type: web
    description: "Python AST module docs"

  - source: "https://www.youtube.com/watch?v=example"
    type: youtube
    description: "Clean code talk"

  - source: "./books/clean-code.pdf"
    type: book
    description: "Clean Code by Robert C. Martin"

  - source: "https://github.com/owner/repo"
    type: github
    description: "Reference repository"

  - source: "./docs/style-guide.md"
    type: text
    description: "Team coding style guide"
```

```bash
argus ingest sync
```

The sync command tracks which sources have already been ingested and skips them, so it is safe to run repeatedly.
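
Conceptually, that idempotence comes down to a registry of already-seen source identifiers. A minimal sketch (the actual registry in `knowledge/registry.py` may store richer metadata and use a different on-disk format):

```python
import json
from pathlib import Path

class SourceRegistry:
    """Track which source identifiers have already been ingested."""

    def __init__(self, path: str = ".argus_registry.json"):  # path is illustrative
        self.path = Path(path)
        self.seen = set(json.loads(self.path.read_text())) if self.path.exists() else set()

    def is_ingested(self, source: str) -> bool:
        return source in self.seen

    def mark(self, source: str) -> None:
        self.seen.add(source)
        self.path.write_text(json.dumps(sorted(self.seen)))

def sync(sources: list[str], registry: SourceRegistry) -> list[str]:
    """Return only the sources that still need ingesting, marking them as done."""
    pending = [s for s in sources if not registry.is_ingested(s)]
    for s in pending:
        registry.mark(s)
    return pending
```

Running `sync` twice over the same list is a no-op the second time, which is what makes repeated cron or CI runs safe.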

## Docker

The included `docker-compose.yml` starts all three services:

```bash
# Start Argus + Ollama + ChromaDB
docker compose up -d

# Pull a model into Ollama (first run only)
docker exec -it argus-ollama-1 ollama pull codellama

# Check health
curl http://localhost:8080/health
```

| Service | Port | Description |
|---------|------|-------------|
| argus | 8080 | API server (FastAPI) |
| ollama | 11434 | LLM inference |
| chroma | 8000 | Vector store |

For GPU acceleration, the compose file includes an NVIDIA GPU reservation on the Ollama service. Remove or adjust the `deploy.resources` block if running on CPU only.

To pass a GitHub token for PR reviews:

```bash
ARGUS_GITHUB_TOKEN=ghp_xxx docker compose up -d
```

## Architecture

```
argus/
├── core/
│   ├── models.py          # Pydantic models: Issue, ReviewResult, ReviewCategory, CategoryScore
│   ├── reviewer.py        # Multi-pass CodeReviewer engine (5 category-specific LLM calls)
│   └── reporter.py        # Output formatters with per-category grouping and scores
├── llm/
│   ├── chains.py          # LangChain + Ollama review chain construction
│   └── prompts.py         # Category-specific prompt templates (security, perf, bug, quality, dba)
├── knowledge/
│   ├── store.py           # ChromaDB vector store wrapper (add, search, clear)
│   ├── registry.py        # Source registry tracking ingested sources
│   └── ingest/
│       ├── base.py        # BaseIngestor with recursive text splitting
│       ├── web.py         # Web page ingestor (BeautifulSoup)
│       ├── youtube.py     # YouTube transcript ingestor
│       ├── book.py        # PDF (PyMuPDF) and EPUB (ebooklib) ingestor
│       ├── github.py      # GitHub repository ingestor
│       └── text.py        # Text/markdown file ingestor
├── integrations/
│   ├── github_pr.py       # PRReviewer: fetch diff, review, post comments with category labels
│   ├── scanner.py         # ProjectScanner: cron-based local project scanning
│   └── mcp.py             # MCP server with 5 tools and category support (stdio transport)
├── cli.py                 # Typer CLI: review (--category), ingest, serve command groups
└── config.py              # Pydantic Settings with ARGUS_ env prefix
```

### Review pipeline

1. Input -- source file, git diff, or project directory
2. Category selection -- determine which categories to review (default: all five)
3. Multi-pass review -- for each category:
   1. Context retrieval -- query ChromaDB with category-specific hints
   2. Prompt construction -- build a category-focused prompt with code and context
   3. LLM inference -- send to Ollama, receive JSON-structured response
   4. Parsing -- extract issues with category tag and per-category score
4. Merge -- combine all category results into a single `ReviewResult` with `category_scores`
5. Output -- format as markdown (grouped by category), GitHub PR comments (with labels), or JSON

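The merge step (4) can be sketched with simplified stand-ins for the Pydantic models; the field names mirror the description above but are illustrative, not Argus's exact schema:

```python
from dataclasses import dataclass, field

@dataclass
class CategoryResult:
    """Output of one category-specific review pass."""
    category: str
    score: float          # per-category score from the LLM's structured response
    issues: list[dict]

@dataclass
class ReviewResult:
    issues: list[dict] = field(default_factory=list)
    category_scores: dict[str, float] = field(default_factory=dict)

def merge(results: list[CategoryResult]) -> ReviewResult:
    """Fold the per-category passes into one result, tagging each issue."""
    merged = ReviewResult()
    for r in results:
        for issue in r.issues:
            merged.issues.append({**issue, "category": r.category})
        merged.category_scores[r.category] = r.score
    return merged
```
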
### Knowledge pipeline

  1. Ingest -- extract text from source (web, YouTube, PDF, etc.)
  2. Split -- chunk text with RecursiveCharacterTextSplitter (1000 chars, 200 overlap)
  3. Embed -- generate embeddings via Ollama
  4. Store -- persist in ChromaDB for similarity search during reviews

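The split step can be approximated with a fixed-size sliding window; the actual pipeline uses LangChain's `RecursiveCharacterTextSplitter`, which additionally prefers natural boundaries such as paragraphs and sentences:

```python
def split_text(text: str, chunk_size: int = 1000, overlap: int = 200) -> list[str]:
    """Naive sliding-window chunking with the same size/overlap as the pipeline."""
    step = chunk_size - overlap
    # Each chunk starts `step` characters after the previous one, so
    # consecutive chunks share `overlap` characters of context.
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]
```

The overlap means a fact straddling a chunk boundary still appears whole in at least one chunk, which keeps similarity search from missing it.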
## Development

```bash
# Install with dev dependencies
pip install ".[dev]"

# Run linter
ruff check argus/

# Run tests
pytest

# Format code
ruff format argus/
```

## License

MIT
