AI-powered academic paper reviewer that detects technical and logical errors using LLMs.
```bash
uv pip install openaireview
```

For development:
```bash
git clone https://github.com/ChicagoHAI/OpenAIReview.git
cd OpenAIReview
uv pip install -e .
```

For math-heavy PDFs, install Marker separately to get accurate LaTeX extraction. Without Marker, PDFs are processed with PyMuPDF, which cannot extract math symbols correctly.
```bash
# Install Marker CLI in an isolated environment (avoids dependency conflicts)
uv tool install marker-pdf
```

Marker is used automatically when available on PATH. For papers with math, we recommend using `.tex` source or arXiv HTML URLs instead of PDF when possible — these always produce correct output.
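The fallback behavior can be sketched in a few lines of Python. This is an illustration, not OpenAIReview's actual code, and the CLI name `marker_single` is an assumption; check what `uv tool install marker-pdf` actually puts on your PATH.

```python
import shutil

def pick_pdf_backend(marker_cli: str = "marker_single") -> str:
    """Choose a PDF extraction backend: Marker when its CLI is on PATH,
    otherwise fall back to PyMuPDF (which cannot extract math correctly)."""
    # shutil.which returns the executable's full path, or None if not found
    return "marker" if shutil.which(marker_cli) else "pymupdf"

print(pick_pdf_backend("no-such-tool"))  # pymupdf on a machine without that tool
```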
First, set your OpenRouter API key (get one at openrouter.ai/keys):
```bash
export OPENROUTER_API_KEY=your_key_here
```

Or create a `.env` file in your working directory:

```
OPENROUTER_API_KEY=your_key_here
```
Then review a paper and visualize results:
```bash
# Review a local file
openaireview review paper.pdf

# Or review directly from an arXiv URL
openaireview review https://arxiv.org/html/2310.06825

# Visualize results
openaireview serve
# Open http://localhost:8080
```

### `openaireview review`

Review an academic paper for technical and logical issues. Accepts a local file path or an arXiv URL.
| Option | Default | Description |
|---|---|---|
| `--method` | `incremental` | Review method: `zero_shot`, `local`, `incremental`, `incremental_full` |
| `--model` | `anthropic/claude-opus-4-6` | Model to use |
| `--output-dir` | `./review_results` | Directory for output JSON files |
| `--name` | (from filename) | Paper slug name |
### `openaireview serve`

Start a local visualization server to browse review results.
| Option | Default | Description |
|---|---|---|
| `--results-dir` | `./review_results` | Directory containing result JSON files |
| `--port` | `8080` | Server port |
- PDF (`.pdf`) — uses Marker for high-quality extraction with LaTeX math; falls back to PyMuPDF if Marker is not installed
- DOCX (`.docx`) — via python-docx
- LaTeX (`.tex`) — plain text with title extraction from `\title{}`
- Text/Markdown (`.txt`, `.md`) — plain text
- arXiv HTML — fetched and parsed directly from `https://arxiv.org/html/<id>` or `https://arxiv.org/abs/<id>`
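Input routing along the lines of the list above can be sketched as follows. The parser names and the function are illustrative assumptions, not OpenAIReview's actual internals.

```python
from pathlib import Path

# Illustrative extension-to-parser map mirroring the supported formats;
# the parser names are hypothetical labels.
PARSERS = {
    ".pdf": "marker-or-pymupdf",
    ".docx": "python-docx",
    ".tex": "latex-plaintext",
    ".txt": "plaintext",
    ".md": "plaintext",
}

def choose_parser(source: str) -> str:
    """Route a local path or an arXiv URL to a parser name."""
    if source.startswith(("https://arxiv.org/html/", "https://arxiv.org/abs/")):
        return "arxiv-html"
    suffix = Path(source).suffix.lower()
    if suffix not in PARSERS:
        raise ValueError(f"unsupported input format: {suffix!r}")
    return PARSERS[suffix]

print(choose_parser("paper.pdf"))                         # marker-or-pymupdf
print(choose_parser("https://arxiv.org/abs/2310.06825"))  # arxiv-html
```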
| Variable | Default | Description |
|---|---|---|
| `OPENROUTER_API_KEY` | (required) | Your OpenRouter API key |
| `MODEL` | `anthropic/claude-opus-4-6` | Default model |

These can be set as environment variables or in a `.env` file. See `.env.example` for a template.
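One plausible resolution order, real environment variables first and then the `.env` file, can be sketched like this. OpenAIReview's actual loader may differ; this mirrors python-dotenv's default of not overriding variables that are already set.

```python
import os

def load_env_file(path: str = ".env") -> dict:
    """Parse simple KEY=VALUE lines from a .env file; comments and blank
    lines are skipped. A missing file yields an empty dict."""
    values = {}
    try:
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                values[key.strip()] = value.strip()
    except FileNotFoundError:
        pass
    return values

def get_setting(name: str, default=None, env_file: str = ".env"):
    # Assumption: a real environment variable wins over a .env entry.
    return os.environ.get(name) or load_env_file(env_file).get(name) or default
```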
All models available on OpenRouter are supported — use any model ID via `--model`. The following models have built-in pricing for accurate cost tracking in the visualization:
| Model | Input ($/1M tokens) | Output ($/1M tokens) |
|---|---|---|
| `anthropic/claude-opus-4-6` | $5.00 | $25.00 |
| `anthropic/claude-opus-4-5` | $5.00 | $25.00 |
| `openai/gpt-5.2-pro` | $21.00 | $168.00 |
| `google/gemini-3.1-pro-preview` | $2.00 | $12.00 |
For models not listed above, a default rate of $5.00/$25.00 per 1M tokens is used.
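Cost tracking reduces to simple arithmetic over token counts. A sketch using the rates above; the function is illustrative, not the tool's API.

```python
# Built-in per-million-token rates from the table above (USD).
RATES = {
    "anthropic/claude-opus-4-6": (5.00, 25.00),
    "anthropic/claude-opus-4-5": (5.00, 25.00),
    "openai/gpt-5.2-pro": (21.00, 168.00),
    "google/gemini-3.1-pro-preview": (2.00, 12.00),
}
DEFAULT_RATE = (5.00, 25.00)  # fallback for unlisted models

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one run: tokens / 1M times the per-million rate."""
    in_rate, out_rate = RATES.get(model, DEFAULT_RATE)
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# 50k input + 10k output tokens with gemini-3.1-pro-preview:
print(f"${estimate_cost('google/gemini-3.1-pro-preview', 50_000, 10_000):.2f}")  # $0.22
```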
- `zero_shot` — single prompt asking the model to find all issues
- `local` — deep-checks each chunk with surrounding window context (no filtering)
- `incremental` — sequential processing with a running summary, then consolidation
- `incremental_full` — same as incremental but returns all comments before consolidation
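The difference between the two incremental variants is only whether consolidation runs at the end. A toy sketch of that control flow, with stub functions standing in for the LLM calls; this is not the package's real implementation.

```python
from typing import Callable, List, Tuple

def incremental_review(
    chunks: List[str],
    review_chunk: Callable[[str, str], Tuple[List[str], str]],
    consolidate: Callable[[List[str]], List[str]],
    keep_all: bool = False,  # True mimics incremental_full
) -> List[str]:
    """Review chunks sequentially, threading a running summary through,
    then consolidate the collected comments (unless keep_all is set)."""
    summary = ""
    comments: List[str] = []
    for chunk in chunks:
        new_comments, summary = review_chunk(chunk, summary)
        comments.extend(new_comments)
    return comments if keep_all else consolidate(comments)

# Toy stand-ins for the LLM calls, just to exercise the flow:
stub = lambda chunk, summary: ([f"issue in {chunk}"], summary + " " + chunk)
dedupe = lambda cs: sorted(set(cs))

print(incremental_review(["sec1", "sec2"], stub, dedupe))
# ['issue in sec1', 'issue in sec2']
```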
Benchmark data and experiment scripts are in `benchmarks/`. See `benchmarks/REPORT.md` for results.