Catch risky analytics SQL before it runs and explain what complex queries actually do.
querylens scans SQL repositories, ranks the worst files first, reviews pull requests for regressions, and highlights reliability and cost risks such as full table scans, wide joins, missing filters, and expensive query patterns before they merge and hit production.
Input SQL:

```sql
SELECT
  c.customer_id,
  SUM(o.amount) AS revenue
FROM customers c
JOIN orders o
  ON c.id = o.customer_id
GROUP BY c.customer_id
```

Command:

```sh
cargo run -- lineage examples/revenue.sql
```

Output:

```text
examples/revenue.sql
Projections:
revenue
└─ SUM(orders.amount)
o.customer_id
└─ orders.customer_id
```
Preferred (PyPI):

```sh
pip install querylens
querylens --help
```

GitHub binaries / Homebrew:

- Will be available once GitHub releases for `querylens` are published; use the PyPI build for now.

Build from source:

```sh
cargo build
# enable AWS Bedrock provider if needed
# cargo build --features bedrock
```

Scan estimates (optional):
- Athena (post-run calibration): `querylens explain --athena-query-execution-id <id> --athena-region us-east-1 --file your.sql` uses `aws athena get-query-execution` to pull `DataScannedInBytes` from a past run. If the AWS CLI is missing or fails, we fall back to "unknown".
- Manual override: `--scan-tb 1.5` or `--scan-bytes 1500000000000`.
- Table stats file (offline, no cloud calls): `--stats-file stats.json`
- Cost-only workflow (no LLM): `querylens cost --file your.sql --stats-file stats.json`
- Collect Postgres stats into a JSON file: `querylens collect-stats --engine postgres --out stats.json --database-url "$DATABASE_URL"`
Example stats.json:
```json
{
  "tables": {
    "athena.sales": { "total_bytes": 1200000000000, "row_count": 3500000000, "partition_columns": ["ds"], "format": "parquet" },
    "athena.customers": { "total_bytes": 8000000000 }
  }
}
```

Heuristics:
- No WHERE → assume full scan.
- Filters on non-partition columns → ~0.7 of table size.
- Equality/range on partition column → small slice (2–5% by default, or days/partitions if provided).
- Sum per-table bytes to show “Estimated scan” and PR cost deltas.
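As a rough sketch, the per-table estimate described above could look like the following. This is illustrative Rust, not querylens's actual code; the enum names and the 3% partition-slice default are assumptions for the example.

```rust
// Illustrative sketch of the scan-size heuristic described above.
// The 0.7 non-partition factor matches the heuristic list; the 3%
// partition slice is an assumed value inside the stated 2-5% range.
enum Filter {
    None,              // no WHERE -> assume full scan
    NonPartition,      // filter on non-partition columns -> ~0.7 of table
    PartitionEquality, // equality/range on a partition column -> small slice
}

fn estimated_bytes(total_bytes: u64, filter: &Filter) -> u64 {
    match filter {
        Filter::None => total_bytes,
        Filter::NonPartition => (total_bytes as f64 * 0.7) as u64,
        Filter::PartitionEquality => (total_bytes as f64 * 0.03) as u64,
    }
}

fn main() {
    // athena.sales total_bytes from the stats.json example: 1.2 TB
    let sales: u64 = 1_200_000_000_000;
    println!("full scan: {}", estimated_bytes(sales, &Filter::None));
    println!("non-partition filter: {}", estimated_bytes(sales, &Filter::NonPartition));
    println!("partition slice: {}", estimated_bytes(sales, &Filter::PartitionEquality));
}
```

Per-table estimates like these are then summed across the query's tables to produce the "Estimated scan" figure.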
| Feature | Description |
|---|---|
| Static SQL checks | Detect risky patterns (SELECT *, possible Cartesian joins, wildcard LIKE, etc.) |
| Column lineage | Trace projection, filter, and join lineage |
| Query explanation | Summarize tables, joins, aggregations, and likely query meaning |
| Table extraction | List tables used by a query |
| Folder scanning | Analyze a directory of SQL files |
| dbt DAG audit | Detect structural hotspots, layer violations, fan-in/fan-out risk, and domain coupling from manifest.json |
| dbt PR DAG review | Compare two manifests and fail CI on complexity regressions |
| Rule controls | Disable rules or override severity by rule_id |
| Athena mode | Extra heuristics for partition/cost patterns |
| CI thresholds | Fail on `low`, `medium`, or `high` findings via `--fail-on` |
SQL pipelines grow quickly and are hard to review.
When a metric looks wrong, teams need to:
- trace where output columns come from
- catch risky query patterns before they hit production
- understand query intent quickly
querylens helps with deterministic checks and optional LLM explanations.
Example query:

```sql
SELECT *
FROM orders o
JOIN customers c
```

Command:

```sh
cargo run -- --file examples/bad_join.sql --static-only
```

Expected findings include:

- `SELECT *`
- possible Cartesian join (`JOIN` without `ON`/`USING`)
Example subquery pattern:

```sh
cargo run -- --file examples/subquery.sql --static-only
```

Expected suggestion includes:

- consider replacing `IN (SELECT ...)` with `JOIN` or `EXISTS`
dbt builds and runs transformation pipelines.
querylens analyzes SQL itself:
- detect risky query patterns
- trace lineage in query text
- explain query logic
They complement each other: dbt for orchestration/modeling, querylens for query inspection.
Tagging `v*` triggers `.github/workflows/release.yml` and publishes:

- `querylens-macos-aarch64.tar.gz`
- `querylens-linux-x86_64.tar.gz`
- `SHA256SUMS`

Create a release tag:

```sh
git tag v0.1.2
git push origin v0.1.2
```

Use the formula template at `packaging/homebrew/querylens.rb`.

For each release:

- Set `version` (without `v`).
- Fill `__SHA256_MACOS_AARCH64__` and `__SHA256_LINUX_X86_64__` from `SHA256SUMS`.
- Commit the formula in your tap repo (for example `kraftaa/homebrew-tap`) as `Formula/querylens.rb`.
- Users install with: `brew install kraftaa/tap/querylens`

Note: all analysis commands are static-only; they never execute your SQL against a database.
```sh
cargo run -- lineage <file.sql>
cargo run -- lineage <file.sql> --column revenue
cargo run -- risk <file.sql>
cargo run -- risk <file.sql> --summary-only
cargo run -- guard <file.sql> --max-risk high --deny-rule CROSS_JOIN
cargo run -- simulate <file.sql> --limit 100
cargo run -- tables <file.sql>
cargo run -- explain <file.sql>
cargo run -- analyze <dir> --glob "*.sql"
cargo run -- analyze <dir> --glob "*.sql" --changed-only
cargo run -- analyze <dir> --glob "*.sql" --changed-only --changed-base main
cargo run -- analyze <dir> --glob "*.sql" --top 10
cargo run -- analyze <dir> --glob "*.sql" --top 10 --verbose
cargo run -- pr-review --base main --head HEAD --dir models --glob "*.sql"
cargo run -- pr-review --base main --head HEAD --dir models --glob "*.sql" --ci
cargo run -- pr-review --base main --head HEAD --dir models --glob "*.sql" --markdown
cargo run -- pr-review --base main --head HEAD --dir models --glob "*.sql" --cost-diff --stats-file stats.json
cargo run -- dbt-audit target/manifest.json
cargo run -- dbt-audit target/manifest.json --fan-in-threshold 6 --fan-out-threshold 8 --domain-coupling-threshold 5 --hotspot-threshold 14
cargo run -- dbt-pr-review --base old_manifest.json --new new_manifest.json
cargo run -- cost --file models/orders.sql --engine athena --stats-file stats.json
cargo run -- collect-stats --engine postgres --out stats.json --database-url "$DATABASE_URL"
```

```sh
cargo run -- dbt-audit target/manifest.json
```

dbt-audit is deterministic and graph-based. It reports actionable findings such as:
- marts depending on marts
- forbidden layer edges (for example marts depending directly on staging, or staging depending on marts)
- extreme fan-in / fan-out models
- domain coupling edges above threshold
- structural hotspots ranked by combined graph risk signals
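Fan-in and fan-out fall directly out of the manifest's dependency edges. A minimal sketch of the counting (illustrative only; model names are invented and querylens's real traversal of `manifest.json` may differ):

```rust
use std::collections::HashMap;

// Illustrative: count fan-in/fan-out per model from a dbt-style
// edge list of (parent, child) dependencies.
fn fan_counts<'a>(edges: &[(&'a str, &'a str)]) -> HashMap<&'a str, (usize, usize)> {
    // value is (fan_in, fan_out)
    let mut counts: HashMap<&str, (usize, usize)> = HashMap::new();
    for &(parent, child) in edges {
        counts.entry(parent).or_default().1 += 1; // parent gains fan-out
        counts.entry(child).or_default().0 += 1;  // child gains fan-in
    }
    counts
}

fn main() {
    let edges = [
        ("stg_orders", "mart_revenue"),
        ("stg_customers", "mart_revenue"),
        ("stg_payments", "mart_revenue"),
    ];
    let counts = fan_counts(&edges);
    let (fan_in, _) = counts["mart_revenue"];
    println!("mart_revenue fan-in: {fan_in}");
}
```

Models whose counts exceed the configured thresholds (for example `--fan-in-threshold 6`) would then surface as findings.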
PR comparison mode:

```sh
cargo run -- dbt-pr-review --base old_manifest.json --new new_manifest.json
```

This reports introduced/resolved findings and whether DAG complexity regressed.
```sh
cargo run -- pr-review --base main --head HEAD --dir models --glob "*.sql"
```

Example output:
```text
SQL Inspect PR Review
Base: main
Head: HEAD
PR status: PASS
No new SQL risk regressions detected.
1 changed SQL file
0 new HIGH-risk queries
0 query lost partition filter
0 ORDER BY without LIMIT regressions
0 possible join amplification regressions
0 files increased estimated scan cost
File: models/example.sql
Previous risk: HIGH
Current risk: HIGH
Risk trend: unchanged
Still risky because:
- SELECT_STAR
Estimated scan: unknown -> unknown
Estimated scan delta: unknown
```
Compact CI mode:

```sh
cargo run -- pr-review --base main --head HEAD --dir models --glob "*.sql" --ci
```

```text
PR status: PASS
Changed SQL files: 1
New HIGH-risk queries: 0
Partition filter regressions: 0
ORDER BY without LIMIT regressions: 0
Join amplification regressions: 0
Files with higher estimated scan: 0
```
Markdown mode for PR comments:

```sh
cargo run -- pr-review --base main --head HEAD --dir models --glob "*.sql" --markdown
```

Cost regression mode:

```sh
cargo run -- pr-review --base main --head HEAD --dir models --glob "*.sql" --cost-diff --stats-file stats.json
```

Example output:
```text
SQL Cost Regression
1 changed SQL file
File: models/revenue.sql
Estimated scan change:
Before: 22 GB
After: 1.40 TB
Increase: 63.6x
Cost regression: HIGH
Reason:
Filter removed: orders.order_date >= DATE '2026-01-01'
Recommendation:
Restore a selective WHERE or partition predicate.
```
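The 63.6x figure is just the after/before ratio of the estimated scans. A sketch of the arithmetic, assuming the numbers from the example output above:

```rust
// Illustrative arithmetic behind the cost-regression "Increase" line.
fn scan_increase_factor(before_bytes: f64, after_bytes: f64) -> f64 {
    after_bytes / before_bytes
}

fn main() {
    let before = 22.0e9; // 22 GB
    let after = 1.40e12; // 1.40 TB
    println!("{:.1}x", scan_increase_factor(before, after)); // 63.6x
}
```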
analyze now highlights severity shape and hotspots, not only rule counts.
Useful flags:

- `--top <N>` number of hotspot files to display (default `5`)
- `--changed-only` limit to changed files in working tree/staging
- `--changed-base <ref>` (with `--changed-only`) limit to committed diff vs ref (for example `main`)
Example:

```text
SQL Inspect Report
Scope: current selection
Analyzed 225 SQL files
Top risks:
1. 37 HIGH-risk files
2. 117 files likely scan full tables
3. 18 files have complex multi-join patterns
4. 12 files contain CROSS JOIN or likely Cartesian behavior
Severity shape:
HIGH: 37 files
MEDIUM: 88 files
LOW: 100 files
Most severe files:
- models/a.sql HIGH SELECT_STAR, FULL_TABLE_SCAN_LIKELY
- models/b.sql HIGH WIDE_JOIN_GRAPH, MISSING_WHERE
```
Verbose mode:

```sh
cargo run -- analyze . --glob "*.sql" --top 10 --verbose
```

```sh
cargo run -- risk examples/bad_join.sql
cargo run -- risk examples/bad_join.sql --summary-only
cargo run -- risk examples/bad_join.sql --scan-tb 2.3
cargo run -- risk examples/bad_join.sql --scan-bytes 2300000000000
```

Example output:
```text
examples/bad_join.sql
Risk score: HIGH
Reasons:
- select star: SELECT *
- missing where: No WHERE clause
Estimated scan: 2.30 TB
```
Compact risk summary:

```text
SQL Inspect Risk
File: examples/bad_join.sql
Risk: HIGH
Estimated scan: unknown
Top reasons:
- select star: SELECT *
- missing where: No WHERE clause
- full table scan likely: Likely full table scan
```
Use one of:

- `--scan-tb <value>` for TB units directly
- `--scan-bytes <value>` to auto-convert bytes to TB
- `--athena-query-execution-id <id>` to fetch `DataScannedInBytes` from Athena via the `aws` CLI
- `--athena-region <region>` optional region override for the Athena lookup
- `--stats-file <path.json>` to estimate scan bytes from table-level stats
```sh
querylens cost --file examples/revenue.sql --engine athena --stats-file stats.json
```

- Uses static analysis plus optional stats to estimate scanned bytes and Athena cost (~$5/TB).
- Accepts the same overrides as risk/explain: `--scan-tb`, `--scan-bytes`, `--athena-query-execution-id`.
- If stats or AWS bytes are missing, prints `estimated_scan: unknown` but still shows risk signals.
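Given the ~$5/TB rate above, the dollar estimate is simple arithmetic. An illustrative sketch (not querylens's exact rounding or rate handling):

```rust
// Illustrative: Athena-style cost at ~$5 per decimal TB scanned.
fn athena_cost_usd(scan_bytes: u64) -> f64 {
    (scan_bytes as f64 / 1e12) * 5.0
}

fn main() {
    // 1.5 TB, as in the --scan-bytes 1500000000000 override example
    println!("${:.2}", athena_cost_usd(1_500_000_000_000)); // $7.50
}
```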
collect-stats can create a stats file from Postgres:

```sh
querylens collect-stats --engine postgres --out stats.json --database-url "$DATABASE_URL"
```

Output looks like:
```json
{
  "tables": {
    "orders": { "total_bytes": 1000000000000, "row_count": 12000000 },
    "customers": { "total_bytes": 200000000000 }
  }
}
```

```sh
cargo run -- explain examples/query.sql
```

Example output:
```text
Query explanation
Meaning: total amount per id, created at, email
Tables: orders, customers, order_items
Join: customers.id = orders.customer_id; order_items.order_id = orders.id
Aggregation: SUM(order_items.quantity * order_items.unit_price) AS total_amount
```
```sh
cargo run -- guard examples/bad_join.sql --max-risk high --deny-rule CROSS_JOIN --deny-rule FULL_TABLE_SCAN_LIKELY
```

Example output:
```text
SQL Inspect Guard
Policy: default
Status: FAIL
Risk: HIGH
Blocking violations
- FULL_TABLE_SCAN_LIKELY
- SELECT_STAR
Why blocked
This query shape is likely to scan most rows and can materially increase cost
Exit code: 2
```
Exit code is 2 when blocked, so this works directly in CI.
```sh
cargo run -- simulate examples/query.sql --limit 100
```

```sh
cargo run -- lineage examples/revenue.sql --column revenue
```

Example output:
```text
examples/revenue.sql
Projections:
revenue
└─ SUM(orders.amount)
```
Provide one of:

- `--sql "<query>"`
- `--file <path>`
- `--dir <path>`
Optional:

- `--provider openai|bedrock|local`
- `--dialect generic|athena`
- `--static-only`
- `--fail-on low|medium|high`
- `--glob "*.sql"`
- `--config querylens.toml`
- `--json`
OpenAI example:

```sh
export OPENAI_API_KEY="..."
export OPENAI_MODEL="gpt-4.1-mini"
cargo run -- --provider openai --file examples/query.sql
```

Bedrock example:

```sh
export AWS_REGION="us-east-1"
export BEDROCK_MODEL_ID="anthropic.claude-3-5-sonnet-20241022-v2:0"
cargo run -- --provider bedrock --file examples/query.sql
```

Local OpenAI-compatible server example:

```sh
export LOCAL_LLM_BASE_URL="http://127.0.0.1:8080"
export LOCAL_LLM_MODEL="llama_instruct.gguf"
cargo run -- --provider local --file examples/query.sql --json
```

Create querylens.toml:
```toml
dialect = "athena"
fail_on = "high"
glob = "*.sql"
suggest_limit_for_exploratory = true
static_only = false

[rules.SELECT_STAR]
enabled = true
severity = "high"

[rules.MISSING_WHERE]
enabled = true
severity = "medium"
```

Rule controls:

- `enabled = false` disables a finding by `rule_id`
- `severity = "low|medium|high"` overrides severity
Example:

```toml
[rules.SELECT_STAR]
enabled = false

[rules.MISSING_WHERE]
severity = "low"
```

Fail a build when risky SQL is found:

```sh
cargo run -- --dir models --dialect athena --fail-on high
```

Or subcommand mode:
```sh
cargo run -- analyze models --glob "*.sql"
cargo run -- analyze models --glob "*.sql" --changed-only
```

Add suppression comments directly in SQL when needed:

```sql
-- querylens: disable=SELECT_STAR,MISSING_WHERE
SELECT * FROM some_small_reference_table
```

Suppression applies to matching `rule_id` values for that file.
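As a sketch, matching such a comment amounts to scanning lines for the `-- querylens: disable=` prefix and splitting the rule_ids. This is hypothetical code for illustration; querylens's actual parser may differ:

```rust
// Hypothetical sketch: collect rule_ids disabled by a
// `-- querylens: disable=...` comment anywhere in a SQL file.
fn suppressed_rules(sql: &str) -> Vec<String> {
    sql.lines()
        .filter_map(|line| line.trim().strip_prefix("-- querylens: disable="))
        .flat_map(|ids| ids.split(',').map(|id| id.trim().to_string()))
        .collect()
}

fn main() {
    let sql = "-- querylens: disable=SELECT_STAR,MISSING_WHERE\nSELECT * FROM some_small_reference_table";
    println!("{:?}", suppressed_rules(sql)); // ["SELECT_STAR", "MISSING_WHERE"]
}
```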
Ready-to-run examples:

- `examples/query.sql`
- `examples/revenue.sql`
- `examples/bad_join.sql`
- `examples/subquery.sql`
- `examples/request_attachment_links.sql`
- `examples/order_milestone_pricing.sql`
- `examples/entity_status_snapshot.sql`
```text
querylens/
  Cargo.toml
  src/
    main.rs
    analyzer.rs
    insights.rs
    config.rs
    prompt.rs
    providers/
      openai.rs
      bedrock.rs
      local.rs
  examples/
```
- missing env vars: set required provider vars
- unexpected model JSON shape: run with `--json` and inspect the response
- no secrets in repo: `.env` and `.env.*` are gitignored