Safe multi-database access for AI agents.
Quickstart · How it works · Databases · MCP · Docs
AI agents are getting access to databases, APIs, and tools. Nobody's checking what they actually do with that access.
faz sits between your agent and your databases. Every query passes through a 5-stage safety pipeline — prompt guarding, RBAC, AST analysis, injection detection, and guardrails — before anything gets executed. Your agent talks to faz. faz talks to your databases. Nothing gets through without being inspected.
```
                          ┌─────────────────────────┐
   Claude, Cursor,        │           faz           │
   or any MCP client ───► │  auth · safety · audit  │ ───► 14 databases
                          └─────────────────────────┘
```
Install faz and generate a config file:
```
pip install faz-core
faz init   # creates faz.yaml + .faz/ directory
```

Windows: if `faz` is not recognized, the Python `Scripts` directory isn't on your `PATH`. Either install inside a virtual environment (`python -m venv venv && venv\Scripts\activate && pip install faz-core`) or use the module form (`python -m faz init`, `python -m faz serve`, etc.), which works regardless of `PATH`.
Add a database. The interactive wizard handles connection details per database type:
```
faz add-database
```

Or edit faz.yaml directly:

```yaml
databases:
  - name: your_database_name
    type: postgresql
    host: localhost
    port: 5432
    database: myapp
    username: readonly_user
    password: ${POSTGRES_PASSWORD}

permissions:
  # R    = select, explain
  # W    = insert, update, delete
  # RW   = select, explain, insert, update, delete
  # RA   = select, explain, insert
  # RWA  = select, explain, insert, update (no delete)
  # A    = everything including DDL (create, drop, alter, truncate)
  # none = blocked entirely
  postgres:
    baseline: R        # R = read, RW = read-write, RWA = read-write-append, none = blocked
    tables:
      orders: RW       # per-table overrides
      audit_log: none
```

Connect your agent via MCP, or start the REST API:
```
faz mcp install   # auto-configures Claude Desktop + Cursor
faz serve         # REST API on localhost:8787
```

That's it. Your agent can now query your databases — every query inspected, every action logged, every dangerous operation blocked.

```
faz query "SELECT * FROM your_table"   # run a query through the safety pipeline
```

faz exposes four MCP tools to your agent:
| Tool | What it does |
|---|---|
| `list_databases` | Show connected databases and their schemas |
| `describe_table` | Inspect a specific table's columns and types |
| `query` | Run a single-database query through the safety pipeline |
| `federated_query` | Query across multiple databases and merge results |
When the agent calls `query`, it gets back either the results:

```json
{
  "status": "ok",
  "data": { "columns": ["customer_id", "total"], "rows": [...], "row_count": 42 },
  "safety": { "stages_passed": ["PROMPT_GUARD", "RBAC", "AST", "INJECTION", "GUARDRAILS"] }
}
```

Or a clear explanation of why the query was blocked:

```json
{
  "status": "blocked",
  "error": { "stage": "RBAC", "reason": "table 'salaries' requires READ_WRITE, agent has READ_ONLY" }
}
```

The agent sees the same contract whether it's connected via MCP or REST — same tools, same safety pipeline, same audit trail.
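Agent-side code can branch on that contract directly. A minimal sketch, assuming the client has already parsed the response JSON into a dict (the `summarize` helper is illustrative, not part of faz):

```python
def summarize(response: dict) -> str:
    """Turn a faz query response into a one-line summary."""
    if response["status"] == "ok":
        # Successful responses carry the result set under "data".
        return f"ok: {response['data']['row_count']} rows"
    # Blocked responses name the pipeline stage that refused the query.
    err = response["error"]
    return f"blocked at {err['stage']}: {err['reason']}"
```

Because MCP and REST return the same shape, one handler covers both transports.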
Every query goes through 5 stages. Any stage can block the request.
```
┌─────────────────────────────────────────────────────────────────────┐
│                         faz safety pipeline                         │
│                                                                     │
│  (1) Prompt Guard     catch destructive intent before parsing       │
│  (2) RBAC Gate        per-table read/write/append permissions       │
│  (3) AST Checker      hard-block DDL (DROP, ALTER, TRUNCATE, ...)   │
│  (4) Injection Scan   tautologies, stacked queries, $where, APOC    │
│  (5) Guardrails       row caps, timeouts, query rewriting           │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘
```
Stage 1 — Prompt Guard scans the raw request for destructive intent (DROP TABLE, DELETE FROM, INSERT a backdoor) before any parsing happens. Context-aware: "show me deleted records" passes fine.
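The core idea can be sketched in a few lines. This is a deliberately naive illustration, not faz's actual implementation: flag text that starts a destructive statement, while prose that merely mentions deletion passes.

```python
import re

# Match destructive statement openers, not mere mentions of deletion.
DESTRUCTIVE = re.compile(
    r"\b(DROP\s+TABLE|TRUNCATE\s+TABLE|DELETE\s+FROM|ALTER\s+TABLE)\b",
    re.IGNORECASE,
)

def looks_destructive(request: str) -> bool:
    return DESTRUCTIVE.search(request) is not None
```

`"DROP TABLE users"` trips the guard; `"show me deleted records"` does not, because "deleted" alone never forms a statement opener.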
Stage 2 — RBAC Gate checks per-table permissions. You define a policy matrix in faz.yaml — which databases and tables the agent can read, write, or append to. Supports per-database baselines with per-table overrides. Unauthorized tables are blocked or stripped from federated queries.
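The resolution rule (table override wins, otherwise the database baseline applies) can be sketched directly from the permission legend in faz.yaml. The `LEVELS` table mirrors that legend; the function names are illustrative, not faz internals:

```python
# Operation sets per access level, as documented in faz.yaml.
LEVELS = {
    "R":    {"select", "explain"},
    "W":    {"insert", "update", "delete"},
    "RW":   {"select", "explain", "insert", "update", "delete"},
    "RA":   {"select", "explain", "insert"},
    "RWA":  {"select", "explain", "insert", "update"},
    "A":    {"select", "explain", "insert", "update", "delete",
             "create", "drop", "alter", "truncate"},
    "none": set(),
}

def allowed(policy: dict, table: str, operation: str) -> bool:
    """Per-table override wins; otherwise fall back to the baseline."""
    level = policy.get("tables", {}).get(table, policy["baseline"])
    return operation in LEVELS[level]
```

With `{"baseline": "R", "tables": {"orders": "RW", "audit_log": "none"}}`, updates to `orders` pass, reads of `audit_log` are refused, and any other table is read-only.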
Stage 3 — AST Checker parses the query and blocks DDL (CREATE, DROP, ALTER, TRUNCATE, …) for every access level except Admin (A). Defense in depth on top of RBAC: only the explicit A baseline lets DDL through.
Stage 4 — Injection Analyser detects injection patterns per query language: SQL tautologies and stacked statements, MongoDB $where and $function, Cypher APOC abuse, Elasticsearch script injection, and more.
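Two of the SQL patterns named above, sketched naively (real detectors work on the parsed query, not regexes; this only shows the shape of the signal):

```python
import re

# "OR 1=1" style tautology: OR comparing a number to itself.
TAUTOLOGY = re.compile(r"\bOR\s+(\d+)\s*=\s*\1\b", re.IGNORECASE)
# Stacked statement: anything following a semicolon.
STACKED = re.compile(r";\s*\S")

def suspicious_sql(query: str) -> list[str]:
    findings = []
    if TAUTOLOGY.search(query):
        findings.append("tautology")
    if STACKED.search(query):
        findings.append("stacked-query")
    return findings
```

A clean `SELECT` returns no findings; `... OR 1=1` and `SELECT 1; DROP TABLE users` each return one.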
Stage 5 — Guardrails rewrites queries for safety without blocking them. Injects LIMIT clauses, $limit pipeline stages, maxTimeMS timeouts, and size caps so your agent can't accidentally pull a 200M-row table.
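The SQL half of that rewrite is easy to picture. A minimal sketch, not faz's actual rewriter (which also handles `$limit` and `maxTimeMS` for non-SQL engines):

```python
import re

def cap_rows(query: str, max_rows: int = 1000) -> str:
    """Append a LIMIT when the query has none, bounding the result set."""
    if re.search(r"\bLIMIT\s+\d+\b", query, re.IGNORECASE):
        return query  # an explicit LIMIT is left alone
    return f"{query.rstrip().rstrip(';')} LIMIT {max_rows}"
```

`cap_rows("SELECT * FROM orders")` yields `SELECT * FROM orders LIMIT 1000`; a query that already carries a `LIMIT` passes through unchanged.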
MCP is how agents connect to tools. By implementing faz as an MCP server, your agent doesn't need to know anything about database drivers, connection strings, or query languages. It connects to faz once and gets safe access to every database you've configured.
```
# Auto-configure Claude Desktop, Cursor, and OpenClaw
faz mcp install

# Just one client
faz mcp install --target claude
faz mcp install --target cursor
faz mcp install --target openclaw

# Preview without writing files
faz mcp install --dry-run
```

`faz mcp install` writes the MCP config so your client knows how to spawn faz. After that, your agent can start querying immediately.
If you'd rather hand the faz block to OpenClaw's own CLI instead of writing ~/.openclaw/openclaw.json directly, generate a portable config first and pipe it through jq:
```
# 1. Render the faz mcpServers entry to a standalone file.
faz mcp install --path faz.json

# 2. Register it with OpenClaw using its built-in `mcp set` command.
openclaw mcp set faz "$(jq -c '.mcpServers.faz' faz.json)"

# 3. Confirm the server is registered.
openclaw mcp list
```

`faz mcp install --path faz.json` writes the standard `{"mcpServers": {"faz": {...}}}` envelope, and `jq -c '.mcpServers.faz'` extracts just the server block — command, args, env — which is the shape OpenClaw's `mcp set` expects.
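If jq isn't installed, the same extraction is a few lines of Python (the `extract_server` name is ours, for illustration):

```python
import json

def extract_server(envelope_json: str, name: str = "faz") -> str:
    """Pull one server block out of an mcpServers envelope, compact."""
    envelope = json.loads(envelope_json)
    return json.dumps(envelope["mcpServers"][name], separators=(",", ":"))
```

Feeding it the envelope string yields just the inner block, ready to hand to `openclaw mcp set`.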
faz also exposes a REST API (`faz serve` on localhost:8787) for non-MCP clients, scripts, and testing. Same pipeline, same audit log; entries record `"transport": "rest/local"` instead of `"mcp/stdio"`.
Query across multiple databases in a single request. faz resolves dependencies, executes steps in parallel where possible, and merges results with DuckDB:
```json
{
  "steps": [
    {
      "step_id": "s0",
      "database": "postgres",
      "table": "orders",
      "query": "SELECT customer_id, total FROM orders WHERE total > 500"
    },
    {
      "step_id": "s1",
      "database": "mongodb",
      "table": "customers",
      "query": "{\"find\": \"customers\"}",
      "depends_on": ["s0"],
      "link_from": "customer_id",
      "link_to": "_id"
    }
  ],
  "merge": "SELECT s1.name, s0.total FROM s0 JOIN s1 ON s0.customer_id = s1._id"
}
```

Each step goes through the full safety pipeline independently. If one step is blocked by RBAC, the rest still execute.
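The dependency resolution can be pictured as waves: every step whose `depends_on` entries have all finished runs in the current wave, in parallel. An illustrative sketch, not faz's scheduler:

```python
def execution_waves(steps: list[dict]) -> list[list[str]]:
    """Group step_ids into waves that can run in parallel."""
    done: set[str] = set()
    remaining = {s["step_id"]: set(s.get("depends_on", [])) for s in steps}
    waves = []
    while remaining:
        # A step is ready once all its dependencies are done.
        ready = sorted(sid for sid, deps in remaining.items() if deps <= done)
        if not ready:
            raise ValueError("dependency cycle between steps")
        waves.append(ready)
        done.update(ready)
        for sid in ready:
            del remaining[sid]
    return waves
```

For the request above this yields `[["s0"], ["s1"]]`: `s0` must finish first so its `customer_id` values can drive the `s1` lookup.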
| Category | Databases |
|---|---|
| Relational | PostgreSQL · MySQL · Oracle |
| Document | MongoDB · CouchDB |
| Search | Elasticsearch · OpenSearch |
| Vector | Weaviate · Qdrant · Milvus · Pinecone |
| Graph | Neo4j |
| Wide-column | Cassandra |
| Cloud | DynamoDB |
faz speaks each database's native query language — SQL, MQL, Cypher, ES DSL, DynamoDB operations — and the safety pipeline understands each one. Injection detection for Cypher is different from SQL. faz handles both.
faz.yaml is the single config file. Generate it with `faz init`, then edit:

```yaml
databases:
  - name: postgres
    type: postgresql
    host: localhost
    port: 5432
    database: myapp
    username: readonly_user
    password: ${POSTGRES_PASSWORD}   # env var expansion
  - name: mongo
    type: mongodb
    host: localhost
    port: 27017
    database: analytics

permissions:
  postgres:
    baseline: R          # default for all tables
    tables:
      orders: RW         # override for specific tables
      audit_log: none    # block entirely
  mongo:
    baseline: R

safety:
  max_rows_per_query: 1000
  query_timeout_seconds: 30
```

The full CLI:

```
faz init                 # generate faz.yaml + .faz/ directory
faz serve                # start REST API on :8787
faz add-database         # interactive database setup wizard
faz query "SELECT ..."   # run a query through the safety pipeline
faz test                 # exercise safety against configured DBs
faz logs                 # pretty-print / tail the audit log
faz policy               # print the loaded permission tree
faz mcp                  # run the MCP stdio server
faz mcp install          # write Claude Desktop / Cursor / OpenClaw configs
```
REST endpoints:

```
GET  /v1/health                          liveness probe
GET  /v1/databases                       list connected DBs + schemas
GET  /v1/databases/{db}/tables/{table}   single-table schema detail
POST /v1/query/simple                    single-database query
POST /v1/query                           federated multi-step query
GET  /v1/results/{request_id}            paginated result retrieval
```
Every query — allowed or blocked — is logged as structured JSONL in .faz/audit.jsonl:
```json
{
  "request_id": "a1b2c3",
  "timestamp": "2026-04-30T12:00:00Z",
  "database": "postgres",
  "table": "orders",
  "query": "SELECT ...",
  "stages_passed": ["PROMPT_GUARD", "RBAC", "AST", "INJECTION", "GUARDRAILS"],
  "status": "ok",
  "transport": "rest/local",
  "row_count": 42,
  "execution_time_ms": 23.4
}
```

Tail live: `faz logs --follow`. Filter by status: `faz logs --status blocked`.
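Because the log is plain JSONL (one JSON object per line), any tooling can consume it. A sketch of the same filter `faz logs --status blocked` applies, done by hand:

```python
import json

def blocked_entries(jsonl_text: str) -> list[dict]:
    """Parse audit JSONL and keep only entries that were blocked."""
    entries = (json.loads(line)
               for line in jsonl_text.splitlines() if line.strip())
    return [e for e in entries if e.get("status") == "blocked"]
```

Point it at the contents of `.faz/audit.jsonl` to pull out every refused query with its stage and reason.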
The hard part of giving AI agents database access isn't the connector — it's everything around it. Authentication, authorization, injection prevention, row limits, audit trails, and the ability to say "no" to a query that would DROP TABLE users.
Most teams solve this by writing bespoke middleware per database. faz makes it one config file across 14 databases, with safety defaults that are hard to get wrong.
```
git clone https://github.com/fazhq/faz.git
cd faz
pip install -e ".[dev]"
pytest
```

This project is licensed under the Apache License 2.0.
