AI-powered code review agent that combines local LLM inference (Ollama) with a knowledge base (ChromaDB) to deliver context-aware reviews. Feed it your team's style guides, best-practice talks, and reference repos -- Argus uses that knowledge to produce reviews tailored to your codebase.
- Category-based review -- five specialized review passes: Security, Performance, Bug, Code Quality, and DBA
- Three review interfaces -- GitHub PR comments, cron-based local scanning, and an MCP server for editor integration
- Multi-source knowledge ingestion -- web pages, YouTube transcripts, PDF/EPUB books, GitHub repos, and text/markdown files
- Structured output -- markdown reports with per-category scores, inline GitHub PR comments with category labels, and JSON
- Fully local -- runs entirely on your infrastructure with Ollama; no data leaves your network
- Docker-ready -- a single `docker compose up` brings up Argus, Ollama, and ChromaDB
```bash
# 1. Clone and install
git clone https://github.com/sixhustle/argus.git
cd argus
pip install .

# 2. Start Ollama and pull a model
ollama serve        # in a separate terminal
ollama pull codellama

# 3. Start ChromaDB
docker run -d -p 8000:8000 chromadb/chroma:latest

# 4. Review a file
argus review file ./src/main.py
```

| Dependency | Version | Purpose |
|---|---|---|
| Python | 3.11+ | Runtime |
| Ollama | latest | Local LLM inference |
| ChromaDB | 0.5+ | Vector store for knowledge base |
```bash
git clone https://github.com/sixhustle/argus.git
cd argus
pip install .

# Development install with linting and testing tools
pip install ".[dev]"
```

Copy the example environment file and edit as needed:
```bash
cp .env.example .env
```

| Variable | Default | Description |
|---|---|---|
| `ARGUS_OLLAMA_BASE_URL` | `http://localhost:11434` | Ollama server URL |
| `ARGUS_OLLAMA_MODEL` | `codellama` | Model name for review and embeddings |
| `ARGUS_CHROMA_HOST` | `localhost` | ChromaDB host |
| `ARGUS_CHROMA_PORT` | `8000` | ChromaDB port |
| `ARGUS_CHROMA_COLLECTION` | `argus` | ChromaDB collection name |
| `ARGUS_GITHUB_TOKEN` | | GitHub personal access token (for PR reviews) |
| `ARGUS_GITHUB_WEBHOOK_SECRET` | | Webhook secret for GitHub integration |
| `ARGUS_REVIEW_TARGET_PATHS` | | Comma-separated project paths for cron scanning |
| `ARGUS_REVIEW_FILE_EXTENSIONS` | `.py,.ts,.js,.go,.java,.rs` | File extensions to review |
All variables use the `ARGUS_` prefix and can be set in `.env` or as environment variables.
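The repo resolves these settings through Pydantic Settings in `config.py`; the stdlib sketch below only illustrates the lookup order (explicit environment variable wins over the default) and is not the actual implementation:

```python
import os

# Defaults mirror the variables above. This is an illustrative stdlib sketch;
# the real config.py uses Pydantic Settings, which also reads .env files.
DEFAULTS = {
    "OLLAMA_BASE_URL": "http://localhost:11434",
    "OLLAMA_MODEL": "codellama",
    "CHROMA_HOST": "localhost",
    "CHROMA_PORT": "8000",
    "CHROMA_COLLECTION": "argus",
}

def setting(name: str) -> str:
    """Look up ARGUS_<NAME> in the environment, falling back to the default."""
    return os.environ.get(f"ARGUS_{name}", DEFAULTS.get(name, ""))

os.environ["ARGUS_OLLAMA_MODEL"] = "qwen2.5-coder"  # hypothetical override
print(setting("OLLAMA_MODEL"))   # the environment override wins
print(setting("CHROMA_PORT"))    # falls back to the default, "8000"
```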
Every review runs five specialized analysis passes by default. Each pass uses a dedicated prompt tuned for its domain:
| Category | Icon | Focus |
|---|---|---|
| Security | 🛡️ | SQL injection, XSS, auth flaws, hardcoded secrets, SSRF |
| Performance | ⚡ | N+1 queries, memory leaks, O(n²), connection pool, caching |
| Bug | 🐛 | NPE, race conditions, off-by-one, error handling, concurrency |
| Code Quality | 📐 | SOLID, naming, complexity, dead code, design patterns |
| DBA | 🗄️ | Query optimization, indexes, deadlocks, schema design, Elasticsearch, Redis |
Use `--category` to run only specific passes:

```bash
# All 5 categories (default)
argus review file src/UserService.java

# Security and DBA only
argus review file src/UserService.java --category security,dba

# Performance only
git diff | argus review diff --category performance
```

Review a single file:
```bash
argus review file path/to/file.py
argus review file path/to/file.py --output json
argus review file path/to/file.py --category security,bug
```

Review a git diff (pipe from stdin):
```bash
git diff | argus review diff
git diff HEAD~3 | argus review diff --output json
git diff | argus review diff --category security,performance
```

Review an entire project:
```bash
argus review project ./my-project
argus review project ./my-project --output json
argus review project ./my-project --category dba
```

Argus posts inline review comments directly on pull requests using the GitHub API.
Via GitHub Actions (recommended):
Add the workflow file at `.github/workflows/review.yml` (included in this repo). It triggers on `pull_request` events (opened and synchronize), runs the review, and posts results as a PR comment.
Required repository secrets:
- `GITHUB_TOKEN` -- automatically provided by GitHub Actions
The workflow:
- Checks out the repository
- Spins up an Ollama service container
- Pulls the configured model
- Generates a diff between the base branch and HEAD
- Pipes the diff through `argus review diff`
- Posts the markdown report as a PR comment (updates existing comment on re-runs)
Via webhook:
Start the API server and configure a GitHub webhook pointing to your server:
```bash
argus serve api --port 8080
```

Set the webhook URL to `https://your-server:8080/webhook` with content type `application/json` and the `pull_request` event selected.
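GitHub signs each webhook delivery with HMAC-SHA256 over the raw request body and sends the digest in the `X-Hub-Signature-256` header as `sha256=<hexdigest>`. How Argus validates this internally is not shown here, but any server holding `ARGUS_GITHUB_WEBHOOK_SECRET` can verify deliveries with the standard scheme:

```python
import hashlib
import hmac

def verify_signature(secret: str, body: bytes, signature_header: str) -> bool:
    """Recompute the HMAC-SHA256 digest over the raw body and compare it
    against the X-Hub-Signature-256 header in constant time."""
    expected = "sha256=" + hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

secret = "my-webhook-secret"   # would come from ARGUS_GITHUB_WEBHOOK_SECRET
body = b'{"action": "opened"}'
header = "sha256=" + hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()

print(verify_signature(secret, body, header))         # True
print(verify_signature(secret, b"tampered", header))  # False
```

`hmac.compare_digest` avoids timing side channels that a plain `==` comparison would leak.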
The `ProjectScanner` reviews local project directories on a schedule and writes timestamped markdown reports.
```bash
# Set target paths in .env
ARGUS_REVIEW_TARGET_PATHS=/path/to/project-a,/path/to/project-b

# Run manually or add to crontab
argus review project /path/to/project
```

Example crontab entry for daily scans:

```
0 2 * * * cd /opt/argus && argus review project /path/to/project --output json > /var/log/argus/$(date +\%F).json
```

Argus exposes five tools via the Model Context Protocol for integration with editors and AI assistants:
| Tool | Description | Optional `categories` param |
|---|---|---|
| `review_file` | Review a single source file | `["security", "dba"]` etc. |
| `review_diff` | Review a unified git diff | `["performance", "bug"]` etc. |
| `review_project` | Review all eligible files in a directory | `["security"]` etc. |
| `ingest` | Add a knowledge source (web, youtube, book, github, text) | -- |
| `search_knowledge` | Query the knowledge base | -- |
All review tools accept an optional `categories` array. When omitted, all five categories run.
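MCP clients invoke these tools over JSON-RPC 2.0 with the protocol's standard `tools/call` method. The envelope below follows the MCP specification; the exact argument names (e.g. `path`) are illustrative assumptions, not confirmed from the Argus source:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "review_file",
    "arguments": {
      "path": "src/UserService.java",
      "categories": ["security", "dba"]
    }
  }
}
```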
Start the MCP server:
```bash
argus serve mcp
```

Add to `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "argus": {
      "command": "argus",
      "args": ["serve", "mcp"],
      "env": {
        "ARGUS_OLLAMA_BASE_URL": "http://localhost:11434",
        "ARGUS_OLLAMA_MODEL": "codellama"
      }
    }
  }
}
```

Add to `~/.claude/settings.json`:

```json
{
  "mcpServers": {
    "argus": {
      "command": "/path/to/argus/.venv/bin/argus",
      "args": ["serve", "mcp"]
    }
  }
}
```

The server communicates over stdio and can be used with any MCP-compatible client.
Argus reviews are more useful when backed by domain knowledge. Ingest your team's coding standards, reference material, and best-practice resources to get reviews that understand your project's conventions.
```bash
# Web page
argus ingest web "https://docs.python.org/3/library/ast.html"

# YouTube transcript
argus ingest youtube "https://www.youtube.com/watch?v=example"

# PDF or EPUB book
argus ingest book ./books/clean-code.pdf

# GitHub repository
argus ingest github "https://github.com/owner/repo"

# Text or markdown file
argus ingest text ./docs/style-guide.md
```

Define all knowledge sources in `sources.yml` at the project root. Run `argus ingest sync` to ingest only new or unprocessed entries.
```yaml
sources:
  - source: "https://docs.python.org/3/library/ast.html"
    type: web
    description: "Python AST module docs"
  - source: "https://www.youtube.com/watch?v=example"
    type: youtube
    description: "Clean code talk"
  - source: "./books/clean-code.pdf"
    type: book
    description: "Clean Code by Robert C. Martin"
  - source: "https://github.com/owner/repo"
    type: github
    description: "Reference repository"
  - source: "./docs/style-guide.md"
    type: text
    description: "Team coding style guide"
```

```bash
argus ingest sync
```

The sync command tracks which sources have already been ingested and skips them, so it is safe to run repeatedly.
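The idempotent sync behaviour can be sketched as follows. This is an illustrative stand-in (not the actual `registry.py`): a small registry file records which sources have been ingested, and repeated runs skip them:

```python
import json
import tempfile
from pathlib import Path

def sync(sources: list[dict], registry_path: Path) -> list[str]:
    """Ingest only sources not yet recorded in the registry file.
    Returns the list of newly ingested source identifiers."""
    seen = set(json.loads(registry_path.read_text())) if registry_path.exists() else set()
    newly_ingested = []
    for entry in sources:
        if entry["source"] in seen:
            continue                      # already ingested -- skip
        # ... the real ingestion (fetch, split, embed, store) would run here ...
        seen.add(entry["source"])
        newly_ingested.append(entry["source"])
    registry_path.write_text(json.dumps(sorted(seen)))
    return newly_ingested

sources = [{"source": "./docs/style-guide.md", "type": "text"}]
registry = Path(tempfile.mkdtemp()) / "registry.json"
print(sync(sources, registry))  # ['./docs/style-guide.md']  (first run ingests)
print(sync(sources, registry))  # []  (second run skips)
```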
The included `docker-compose.yml` starts all three services:

```bash
# Start Argus + Ollama + ChromaDB
docker compose up -d

# Pull a model into Ollama (first run only)
docker exec -it argus-ollama-1 ollama pull codellama

# Check health
curl http://localhost:8080/health
```

| Service | Port | Description |
|---|---|---|
| argus | 8080 | API server (FastAPI) |
| ollama | 11434 | LLM inference |
| chroma | 8000 | Vector store |
For GPU acceleration, the compose file includes an NVIDIA GPU reservation on the Ollama service. Remove or adjust the `deploy.resources` block if running on CPU only.
To pass a GitHub token for PR reviews:
```bash
ARGUS_GITHUB_TOKEN=ghp_xxx docker compose up -d
```

```
argus/
├── core/
│   ├── models.py        # Pydantic models: Issue, ReviewResult, ReviewCategory, CategoryScore
│   ├── reviewer.py      # Multi-pass CodeReviewer engine (5 category-specific LLM calls)
│   └── reporter.py      # Output formatters with per-category grouping and scores
├── llm/
│   ├── chains.py        # LangChain + Ollama review chain construction
│   └── prompts.py       # Category-specific prompt templates (security, perf, bug, quality, dba)
├── knowledge/
│   ├── store.py         # ChromaDB vector store wrapper (add, search, clear)
│   ├── registry.py      # Source registry tracking ingested sources
│   └── ingest/
│       ├── base.py      # BaseIngestor with recursive text splitting
│       ├── web.py       # Web page ingestor (BeautifulSoup)
│       ├── youtube.py   # YouTube transcript ingestor
│       ├── book.py      # PDF (PyMuPDF) and EPUB (ebooklib) ingestor
│       ├── github.py    # GitHub repository ingestor
│       └── text.py      # Text/markdown file ingestor
├── integrations/
│   ├── github_pr.py     # PRReviewer: fetch diff, review, post comments with category labels
│   ├── scanner.py       # ProjectScanner: cron-based local project scanning
│   └── mcp.py           # MCP server with 5 tools and category support (stdio transport)
├── cli.py               # Typer CLI: review (--category), ingest, serve command groups
└── config.py            # Pydantic Settings with ARGUS_ env prefix
```
- Input -- source file, git diff, or project directory
- Category selection -- determine which categories to review (default: all five)
- Multi-pass review -- for each category:
  - Context retrieval -- query ChromaDB with category-specific hints
  - Prompt construction -- build a category-focused prompt with code and context
  - LLM inference -- send to Ollama, receive a JSON-structured response
  - Parsing -- extract issues with a category tag and per-category score
- Merge -- combine all category results into a single `ReviewResult` with `category_scores`
- Output -- format as markdown (grouped by category), GitHub PR comments (with labels), or JSON
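The multi-pass loop and merge step above can be sketched as follows. The class and function names echo the models in `core/models.py` but the shapes here are illustrative assumptions, not the actual `reviewer.py` API:

```python
from dataclasses import dataclass, field

@dataclass
class Issue:
    category: str
    message: str

@dataclass
class ReviewResult:
    issues: list = field(default_factory=list)
    category_scores: dict = field(default_factory=dict)

CATEGORIES = ["security", "performance", "bug", "quality", "dba"]

def review_category(code: str, category: str) -> tuple[list, float]:
    """Stand-in for one LLM pass: retrieve context, build the category
    prompt, call Ollama, and parse the JSON response into issues + score."""
    issues = [Issue(category, f"example {category} finding")]
    return issues, 7.5

def review(code: str, categories=None) -> ReviewResult:
    result = ReviewResult()
    for cat in categories or CATEGORIES:       # default: all five passes
        issues, score = review_category(code, cat)
        result.issues.extend(issues)           # merge issues, tagged by category
        result.category_scores[cat] = score    # keep one score per category
    return result

r = review("def f(): pass", categories=["security", "dba"])
print(sorted(r.category_scores))   # ['dba', 'security']
```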
- Ingest -- extract text from source (web, YouTube, PDF, etc.)
- Split -- chunk text with `RecursiveCharacterTextSplitter` (1000 chars, 200 overlap)
- Embed -- generate embeddings via Ollama
- Store -- persist in ChromaDB for similarity search during reviews
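The Split step's windowing can be illustrated with a simplified character-based splitter. The real pipeline uses LangChain's `RecursiveCharacterTextSplitter`, which additionally prefers natural break points (paragraphs, sentences); this sketch only shows the size/overlap arithmetic:

```python
def split_text(text: str, chunk_size: int = 1000, overlap: int = 200) -> list[str]:
    """Slide a chunk_size window over the text, stepping by
    (chunk_size - overlap) so consecutive chunks share `overlap` chars."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]

chunks = split_text("x" * 2500)
print(len(chunks))                               # 3
print([len(c) for c in chunks])                  # [1000, 1000, 900]
```

The 200-character overlap keeps context that straddles a chunk boundary retrievable from at least one chunk.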
```bash
# Install with dev dependencies
pip install ".[dev]"

# Run linter
ruff check argus/

# Run tests
pytest

# Format code
ruff format argus/
```

MIT