The security gateway for AI agents.
Protect your MCP servers from prompt injection, data exfiltration, and autonomous drift — with sub-millisecond overhead.
Quick Start • OpenClaw • Performance • Features • Dashboard • Architecture • Contributing
MCP is becoming the standard for connecting AI agents to tools — but it has no native permissions model. Real attacks have already been demonstrated:
- A malicious MCP server exfiltrated an entire WhatsApp history by poisoning tool definitions
- Prompt injection via a GitHub issue made the official GitHub MCP server leak private repo data
- MCP tool definitions can mutate after installation — a safe tool on day 1 becomes a backdoor by day 7
Navil sits between your MCP clients and servers as a security proxy. It monitors, detects, and enforces — so your agents stay within bounds.
Developed by Pantheon Lab Pte Ltd.
pip install navil
# Scan an MCP server config for vulnerabilities
navil scan config.json
# Run all 11 SAFE-MCP penetration tests
navil pentest
# Start the security proxy
navil proxy start --target http://localhost:3000

That's it. Your agents now connect through Navil instead of directly to the MCP server.
Using OpenClaw? Secure every MCP server in your config with one command:
pip install navil
navil wrap openclaw.json

That's it. Navil backs up your original config, then wraps every MCP server with navil shim so all tool calls are monitored, policy-checked, and anomaly-detected — with <3 µs overhead per message.
Why this matters: OpenClaw's skill registry has had 824+ malicious skills identified, and 135,000+ instances are exposed to the public internet.
Before: After:
┌─────────────────────┐ ┌─────────────────────┐
│ "filesystem": { │ │ "filesystem": { │
│ "command": "npx", │ navil │ "command":"navil",│
│ "args": [...] │ ──wrap──► │ "args": ["shim", │
│ } │ │ "--cmd","npx …"]│
└─────────────────────┘ └─────────────────────┘
Every server gets its own agent identity for per-server policy and telemetry. Your env vars, cwd, and other config keys pass through untouched.
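The rewrite shown above is a small JSON transformation. Here is a minimal sketch of the idea — the field names match the diagram, but the `--agent` flag and helper name are our illustration, not Navil's actual implementation:

```python
import json

def wrap_server(name: str, server: dict) -> dict:
    """Rewrite one MCP server entry so it launches through `navil shim`.

    Only `command` and `args` are rewritten; env vars, cwd, and other
    keys pass through untouched. The `--agent` flag is an assumption to
    illustrate per-server identity.
    """
    original_cmd = " ".join([server["command"], *server.get("args", [])])
    wrapped = dict(server)  # shallow copy keeps env, cwd, etc.
    wrapped["command"] = "navil"
    wrapped["args"] = ["shim", "--agent", name, "--cmd", original_cmd]
    return wrapped

config = {
    "mcpServers": {
        "filesystem": {"command": "npx", "args": ["-y", "@modelcontextprotocol/server-filesystem"]}
    }
}
for name, server in config["mcpServers"].items():
    config["mcpServers"][name] = wrap_server(name, server)

print(json.dumps(config, indent=2))
```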
# Wrap only specific servers
navil wrap openclaw.json --only filesystem,github
# Attach a policy file to all servers
navil wrap openclaw.json --policy policy.yaml
# Preview changes without modifying anything
navil wrap openclaw.json --dry-run
# Undo: restore your original config
navil wrap openclaw.json --undo

navil wrap ~/Library/Application\ Support/Claude/claude_desktop_config.json

# Audit your MCP configs for vulnerabilities (0–100 score)
navil scan openclaw.json

For OpenClaw instances using MCP over Streamable HTTP, use Navil's HTTP proxy:

navil proxy start --target http://your-mcp-server:3000 --no-auth

Then point your OpenClaw MCP server URL at http://localhost:9090/mcp instead of the real server.
Navil's security pipeline adds negligible overhead to real workloads. We benchmarked the stdio shim against a mock MCP server to isolate the cost of security checks from network/tool latency.
| Component | Mean | p50 | p99 |
|---|---|---|---|
| Full security check (sanitize + parse + policy + anomaly) | 2.7 µs | 2.4 µs | 6.1 µs |
| orjson parse | 0.9 µs | 0.8 µs | 2.0 µs |
| Policy engine lookup | 0.5 µs | 0.4 µs | 1.2 µs |
| Anomaly gate scan | 0.3 µs | 0.3 µs | 0.8 µs |
| Session size | Direct | With Navil | Overhead |
|---|---|---|---|
| Light (5 tool calls) | 11.5 ms | 12.0 ms | +0.5 ms (4.4%) |
| Medium (50 tool calls) | 12.8 ms | 14.2 ms | +1.4 ms (10.8%) |
| Heavy (500 tool calls) | 28.0 ms | 40.3 ms | +12.3 ms (43.9%) |
Context: These benchmarks use a mock server that responds in ~40 µs. Real MCP tools take 1–5,000 ms (file reads, API calls, LLM inference). On any real workload, Navil's overhead is < 0.1% of total session time.
Run the benchmarks yourself:
python bench_shim_latency.py # Per-message breakdown
python bench_total_latency.py # Full session wall-clock

Axum-based reverse proxy with HMAC-SHA256 verification, JSON depth limiting, O(1) Redis threshold checks, and minute-bucketed rate limiting. Sub-millisecond overhead per request.
12 statistical detectors with adaptive EMA baselines, operator feedback loops, and learned pattern matching. Runs off the hot path via Redis-bridged telemetry — security analysis never blocks your agents.
Detect plaintext credentials, over-privileged permissions, missing authentication, unverified sources, and malicious patterns. Produces a 0–100 security score.
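One way to picture the 0–100 score is as a deduction model: start at 100 and subtract a weighted penalty per finding, clamped to the valid range. The categories below come from the list above, but the weights and function are purely illustrative — this is not Navil's actual rubric:

```python
# Illustrative severity weights -- not Navil's actual rubric.
PENALTIES = {
    "plaintext_credential": 30,
    "missing_authentication": 25,
    "over_privileged": 20,
    "unverified_source": 15,
    "suspicious_pattern": 25,
}

def security_score(findings: list[str]) -> int:
    """Deduct a penalty per finding, clamped to the 0-100 range."""
    score = 100 - sum(PENALTIES.get(f, 10) for f in findings)
    return max(0, min(100, score))

print(security_score([]))  # clean config scores 100
print(security_score(["plaintext_credential", "over_privileged"]))
```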
YAML-driven tool/action allow-lists, per-agent rate limiting, data-sensitivity gates, and suspicious-pattern detection.
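A policy file along those lines might look like the following sketch. The key names here are assumptions for illustration, not Navil's actual schema:

```yaml
# Illustrative policy sketch -- key names are assumptions, not the real schema.
agents:
  filesystem:
    allow_tools: [read_file, list_directory]   # allow-list; everything else denied
    rate_limit: 60                             # calls per minute, per agent
    data_sensitivity:
      block_patterns: ["API_KEY", "BEGIN RSA PRIVATE KEY"]
```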
11 SAFE-MCP attack simulations that validate your detectors actually catch threats. No real network traffic generated.
AI-powered config analysis, anomaly explanation, policy generation, and self-healing. Bring your own key — supports Anthropic, OpenAI, Gemini, and Ollama (fully local).
Per-agent trust scores with behavioral profiling and anomaly trend analysis. Continuously scores every agent over time and surfaces risk trends before they become incidents.
Issue, rotate, and revoke JWT tokens with JIT provisioning, configurable TTL, usage tracking, and immutable audit logs. Hardened with a global active-credential cap (500), auto-purge of expired credentials, thread-safe rotation (no TOCTOU races), and bearer-token auth on all credential endpoints (set NAVIL_DASHBOARD_TOKEN).
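The lifecycle described above — TTL, auto-purge of expired credentials, and the global cap of 500 — can be sketched with a tiny in-memory store. This illustrates the bookkeeping only; Navil's real implementation (JWTs, audit logs, thread-safe rotation) is more involved:

```python
import time
import uuid

MAX_ACTIVE = 500  # global active-credential cap, per the text

class CredentialStore:
    """Toy credential store illustrating TTL, auto-purge, and the cap."""

    def __init__(self) -> None:
        self._creds: dict[str, float] = {}  # credential id -> expiry timestamp

    def purge_expired(self) -> None:
        now = time.time()
        self._creds = {cid: exp for cid, exp in self._creds.items() if exp > now}

    def issue(self, ttl: float) -> str:
        self.purge_expired()  # auto-purge before enforcing the cap
        if len(self._creds) >= MAX_ACTIVE:
            raise RuntimeError("active-credential cap reached")
        cid = str(uuid.uuid4())
        self._creds[cid] = time.time() + ttl
        return cid

    def revoke(self, cid: str) -> None:
        self._creds.pop(cid, None)

store = CredentialStore()
cid = store.issue(ttl=3600)
store.revoke(cid)
```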
Cloud sync anonymizes all agent identities with HMAC-SHA256, enforces a strict field allowlist, and actively blocks banned fields. Raw data never leaves your deployment. Opt out entirely with NAVIL_DISABLE_CLOUD_SYNC=true. See Privacy Guarantees.
Agents ──> [ Rust Proxy :8080 ] ──> MCP Servers
|
Redis :6379 (thresholds, rate counters, telemetry queue)
|
[ Python Workers :8484 ] (ML detectors, LLM analysis, dashboard)
|
(optional) Navil Cloud (anonymized threat intel)
The Rust proxy handles the hot path: sanitization, HMAC auth, O(1) threshold gates, and rate limiting. It publishes telemetry to a Redis queue. Python workers consume events, run the full anomaly detection suite, recompute thresholds, and sync them back to Redis for the proxy to read.
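This split is what keeps the hot path cheap: per request, the proxy only does an O(1) lookup-and-compare against thresholds the workers precomputed. A sketch of that gate, with a plain dict standing in for Redis and illustrative key/metric names:

```python
# Thresholds are recomputed off the hot path by the workers and synced
# back; a plain dict stands in for Redis in this sketch.
thresholds: dict[str, float] = {"agent-1:calls_per_min": 120.0}

def hot_path_gate(agent: str, metric: str, value: float) -> bool:
    """O(1) per-request check: allow unless the value exceeds its threshold."""
    limit = thresholds.get(f"{agent}:{metric}")
    return limit is None or value <= limit

print(hot_path_gate("agent-1", "calls_per_min", 80))   # under threshold: allowed
print(hot_path_gate("agent-1", "calls_per_min", 500))  # over threshold: blocked
```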
For the full system design, see ARCHITECTURE.md.
Navil ships with a full-featured 12-page security dashboard for visualizing and managing your MCP fleet.
| Component | Required | Version |
|---|---|---|
| Python | Yes | 3.10+ |
| Redis | Yes (for proxy mode) | 5.0+ |
| Rust | Optional (for Rust proxy) | stable |
| Node.js | Optional (for dashboard dev) | 20+ |
pip install navil

With optional features:
pip install navil[llm] # + AI-powered analysis (Anthropic, OpenAI, Gemini)
pip install navil[cloud] # + Cloud dashboard (FastAPI + React)
pip install navil[all] # Everything

git clone https://github.com/ivanlkf/navil.git
cd navil
pip install -e ".[dev]"

cd navil-proxy
cargo build --release

The compiled binary is at navil-proxy/target/release/navil-proxy.
# macOS
brew install redis && redis-server
# Docker
docker run -d -p 6379:6379 redis:7-alpine
# Linux
sudo apt install redis-server && sudo systemctl start redis

cd navil-proxy
# Point at your MCP server and Redis
NAVIL_TARGET_URL=http://localhost:3000 \
NAVIL_REDIS_URL=redis://127.0.0.1:6379 \
NAVIL_PORT=8080 \
cargo run --release

Your agents now connect to http://localhost:8080/mcp instead of directly to the MCP server.
Optional: enable HMAC request signing:
NAVIL_HMAC_SECRET=your-secret-key cargo run --release

pip install navil[cloud]
navil cloud serve # Opens at http://localhost:8484

The Python control plane automatically connects to Redis, consumes telemetry from the Rust proxy, runs anomaly detection, and serves the dashboard.
navil seed-database # 10 scenarios x 1,000 iterations
navil seed-database -n 5000 # More iterations for tighter baselines
navil seed-database --json # Machine-readable output

This populates the BehavioralAnomalyDetector with synthetic attack data so the statistical thresholds (mean + 5*std_dev) have historical baselines from day one.
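The mean + 5·std_dev rule can be sketched directly. This illustrates the statistics only (the real detectors use adaptive EMA baselines, and the baseline values below are made up):

```python
import statistics

K = 5  # threshold multiplier from the text: mean + 5 * std_dev

def is_anomalous(history: list[float], value: float) -> bool:
    """Flag a value that exceeds the historical mean by K standard deviations."""
    mean = statistics.fmean(history)
    std = statistics.pstdev(history)
    return value > mean + K * std

baseline = [10, 12, 11, 9, 10, 11, 10, 12]  # e.g. tool calls per minute
print(is_anomalous(baseline, 13))  # within normal variation
print(is_anomalous(baseline, 60))  # far outside the baseline: flagged
```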
If you don't need the Rust data plane, the Python proxy works standalone:
navil proxy start --target http://localhost:3000

# Uses ANTHROPIC_API_KEY env var automatically
navil llm analyze-config config.json
# Or specify provider explicitly
navil llm generate-policy "only allow read access to logs" --provider gemini
navil llm explain-anomaly '{"type": "rate_spike", "agent": "bot-1"}' --provider openai

Ollama is also supported for fully local, offline AI analysis:
navil cloud serve
# Then configure in Settings: provider=openai, base_url=http://localhost:11434/v1, model=llama3.2

navil credential issue --agent my-agent --scope "read:tools" --ttl 3600

navil policy check --tool file_system --agent my-agent --action read

| Command | Description |
|---|---|
| `navil scan <config>` | Scan MCP config for vulnerabilities (0-100 score) |
| `navil pentest` | Run SAFE-MCP penetration tests (11 attack scenarios) |
| `navil proxy start` | Start Python MCP security proxy |
| `navil proxy stop` | Stop the running proxy |
| `navil cloud serve` | Launch Navil Cloud dashboard |
| `navil seed-database` | Populate ML baselines with synthetic attack data |
| `navil credential issue` | Issue a new JWT credential |
| `navil credential revoke` | Revoke an active credential |
| `navil credential list` | List credentials with filters |
| `navil policy check` | Evaluate a tool call against policy |
| `navil wrap <config>` | One-command setup: wrap all MCP servers in a config with navil shim |
| `navil shim` | Wrap a single stdio MCP server with security checks |
| `navil monitor start` | Start anomaly monitoring mode |
| `navil report` | Generate security report |
| `navil llm analyze-config` | AI-powered config analysis |
| `navil llm explain-anomaly` | AI-powered anomaly explanation |
| `navil llm generate-policy` | Generate policy from natural language |
| `navil llm suggest-healing` | AI-powered remediation suggestions |
| Variable | Default | Purpose |
|---|---|---|
| `NAVIL_TARGET_URL` | `http://localhost:3000` | Upstream MCP server (Rust proxy) |
| `NAVIL_REDIS_URL` | `redis://127.0.0.1:6379` | Redis connection (Rust proxy) |
| `NAVIL_HMAC_SECRET` | (none) | HMAC signing key for request auth |
| `NAVIL_PORT` | `8080` | Rust proxy listen port |
| `NAVIL_DISABLE_CLOUD_SYNC` | `false` | Disable cloud telemetry sync |
| `NAVIL_API_KEY` | (none) | Navil Cloud API key (paid mode) |
| `NAVIL_INTEL_SYNC_INTERVAL` | `3600` | Seconds between outbound cloud sync cycles |
| `NAVIL_INTEL_FETCH_INTERVAL` | `3600` | Seconds between inbound pattern fetch cycles |
| `NAVIL_DEPLOYMENT_SECRET` | (auto-generated) | Secret for HMAC agent anonymization |
| `NAVIL_CLOUD_URL` | `https://api.navil.ai` | Navil Cloud API base URL |
| `NAVIL_DASHBOARD_TOKEN` | (none) | Bearer token for credential endpoints (unset = open in dev) |
| `ANTHROPIC_API_KEY` | (none) | Anthropic API key for LLM features |
| `OPENAI_API_KEY` | (none) | OpenAI API key for LLM features |
| `GEMINI_API_KEY` | (none) | Google Gemini API key for LLM features |
| `ALLOWED_ORIGINS` | `*` | CORS origins for dashboard API |
When cloud sync is enabled, Navil enforces strict privacy guarantees at the transmission boundary:
- Agent identities are replaced with one-way HMAC-SHA256 hashes using a per-deployment secret. Cannot be reversed.
- Only numeric aggregates and categorical labels leave the deployment (severity, confidence, duration, bytes, anomaly type).
- Raw data is actively blocked — agent names, tool arguments, evidence, file paths, server URLs, IP addresses, and prompts are stripped. A runtime check raises `ValueError` if any banned field leaks through.
- Opt out entirely with `NAVIL_DISABLE_CLOUD_SYNC=true`.
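The anonymization step is standard keyed hashing. A minimal sketch of the idea — the function and variable names here are ours, not Navil's:

```python
import hashlib
import hmac
import os

# Per-deployment secret (NAVIL_DEPLOYMENT_SECRET in the real system).
DEPLOYMENT_SECRET = os.urandom(32)

def anonymize_agent(agent_id: str) -> str:
    """One-way HMAC-SHA256: stable within a deployment, irreversible outside it."""
    return hmac.new(DEPLOYMENT_SECRET, agent_id.encode(), hashlib.sha256).hexdigest()

# The same agent always maps to the same hash, so per-agent trends survive
# anonymization -- but the raw name never leaves the deployment.
assert anonymize_agent("bot-1") == anonymize_agent("bot-1")
print(anonymize_agent("bot-1")[:16])
```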
See ARCHITECTURE.md for the full field allowlist/blocklist.
Navil operates on a Mutual Defense model. AI threats evolve in minutes, not months. A prompt injection discovered on one machine should protect every other machine within seconds.
The Give: Your local Navil instance detects a new attack pattern and sends a sanitized metadata snippet — anomaly type, severity, confidence score, tool name, and timing — to the central hub. Agent identities are HMAC-anonymized. Raw data never leaves your machine. You can audit exactly what is sent by inspecting navil/cloud/telemetry_sync.py.
The Get: In exchange, your instance receives real-time updates from the Global Threat Blocklist — a curated feed of malicious patterns discovered by thousands of other Navil nodes. The built-in ThreatIntelFetcher polls GET /v1/threat-intel/patterns on startup and periodically thereafter, publishing patterns to the local PatternStore for confidence-boosted anomaly detection.
- Local Sanitization — All telemetry is stripped of PII, secrets, and raw prompt content on your machine before it ever reaches our servers.
- No Raw Data — We never see your AI's conversations. We only see the shape of the attack (anomaly type, severity, timing, tool name).
- Deterministic Deduplication — Each sync event carries a UUID5 `event_uuid` so the cloud can deduplicate without storing raw identifiers.
- Full Transparency — You can audit exactly what is being sent by inspecting `navil/cloud/telemetry_sync.py`.
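UUID5 is deterministic (a name-based SHA-1 hash within a namespace), so two syncs of the same event collapse to one ID without the cloud ever seeing raw identifiers. A sketch with a hypothetical namespace and event key — Navil's actual inputs to the hash may differ:

```python
import uuid

# Hypothetical namespace -- the real namespace UUID is internal to Navil.
NAVIL_NS = uuid.uuid5(uuid.NAMESPACE_DNS, "telemetry.navil.ai")

def event_uuid(deployment_hash: str, anomaly_type: str, ts_bucket: int) -> uuid.UUID:
    """Deterministic event ID: the same event always yields the same UUID."""
    return uuid.uuid5(NAVIL_NS, f"{deployment_hash}:{anomaly_type}:{ts_bucket}")

a = event_uuid("d3adb33f", "rate_spike", 1700000000)
b = event_uuid("d3adb33f", "rate_spike", 1700000000)
assert a == b  # duplicate syncs deduplicate to a single ID
```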
| Tier | Telemetry Sharing | Global Blocklist Access |
|---|---|---|
| Community (OSS) | Required (default) | Full access (crowdsourced feed) |
| Dark Site (OSS) | Disabled | No global updates (local-only protection) |
| Pro / Team (Paid) | Optional ("privacy premium") | Premium access (real-time + verified feed) |
For enterprises whose security policy prohibits outbound telemetry: provide a valid NAVIL_API_KEY to receive threat intelligence without sharing your own signals. Visit navil.ai to get a key.
# Community mode (default): share and receive
navil cloud serve
# Paid mode: receive without sharing
NAVIL_API_KEY=nvl_your_key NAVIL_DISABLE_CLOUD_SYNC=true navil cloud serve

# Install dev dependencies
pip install -e ".[dev]"
# Run tests (473 tests)
pytest
# Lint
ruff check .
# Type check
mypy navil
# Build Rust proxy
cd navil-proxy && cargo build --release
# Dashboard (requires Node.js 20+)
cd dashboard && npm install && npm run dev

We welcome contributions! See CONTRIBUTING.md for development setup, coding standards, and how to submit changes.
See SECURITY.md for our vulnerability disclosure policy.
Navil uses a dual-license model:
| Component | License |
|---|---|
| Core CLI, anomaly detection, proxy, adaptive ML, Rust data plane (`navil/`, `navil/adaptive/`, `navil-proxy/`) | Apache 2.0 |
| Cloud dashboard, LLM features, API server (`navil/cloud/`, `navil/llm/`, `dashboard/`) | Business Source License 1.1 |
Apache 2.0 — free to use, modify, and redistribute for any purpose.
BSL 1.1 — free for internal use and self-hosting. You may not offer the Licensed Work as a competing hosted service. Each release converts to Apache 2.0 four years after its publication date.
Commercial licensing enquiries: https://github.com/ivanlkf/navil/issues