MITM proxy and dashboard for AI CLI traffic.
- See what AI CLI tools actually send to model providers over HTTP, websocket, and SSE.
- Catch secrets, tokens, private code, customer fragments, and telemetry before they leave the machine or network.
- Attach listener callbacks for audit logs, Telegram alerts, dashboards, analytics, session recording, or RAG ingest.
- Attach decision handlers that can `allow`, `block`, `modify`, `replace`, or `route` live traffic.
- Route simple work to local models such as Ollama or llama.cpp and reserve external models for harder requests.
- Keep one control point across fast-changing AI CLIs instead of rebuilding integrations for every client update.
- Build agent orchestration on top of normalized traffic events, not only terminal/PTY control.
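As a rough idea of the listener-callback pattern, here is a minimal Python sketch. The event field names (`phase`, `domain`, `action`) and the registration mechanism are illustrative assumptions, not the project's actual API:

```python
# Hypothetical listener callback over normalized traffic events.
# Field names below are illustrative, not Agent Shield's confirmed schema.
import json

AUDIT_LOG = []  # stand-in for a real audit sink

def on_event(event: dict) -> None:
    """Best-effort listener: record every event, alert on blocked traffic."""
    AUDIT_LOG.append(json.dumps(event, sort_keys=True))
    if event.get("action") == "block":
        # Swap this print for a Telegram/webhook call in a real deployment.
        print(f"blocked {event.get('phase')} to {event.get('domain')}")

on_event({"phase": "http.request", "domain": "api.example.com", "action": "block"})
```

Listeners are observe-only; deciding what traffic is allowed through belongs to the decision handlers described below.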
Start the proxy and embedded dashboard:

```shell
docker build -t agent-shield .
docker rm -f agent-shield 2>/dev/null || true
docker run -d --name agent-shield \
  -p 8888:8888 \
  -p 9999:9999 \
  -v ~/.mitmproxy:/root/.mitmproxy:ro \
  -v /tmp/agent-shield-bodies:/tmp/agent-shield-bodies \
  agent-shield
```

Open the dashboard: http://127.0.0.1:9999

Ports:
- proxy: `:8888`
- dashboard: `:9999`
Install the ash launcher:

```shell
./scripts/init.sh
```

This installs:
- launcher state under `~/.agent-shield`
- public CA cert under `~/.agent-shield/certs`
- config under `~/.agent-shield/config.env`
- `ash` symlink into `/usr/local/bin/ash` when writable, otherwise `~/.local/bin/ash`
Run AI CLI tools through Agent Shield:

```shell
ash codex
ash claude
ash gemini
ash gemini -p 'say ok'
```

If the proxy runs on another machine, point the client at that server:

```shell
AGENT_SHIELD_PROXY_URL=http://SERVER_HOST:8888 ash codex
```

Show the resolved client env:

```shell
ash env
```

For Gemini, set a specific credential home if needed:

```shell
AGENT_SHIELD_GEMINI_HOME=/path/to/gemini-home ash gemini
```

For Gemini, ash also clears `NO_BROWSER` by default so Gemini uses the normal browser callback flow.
Compatibility note:
- `scripts/with-proxy-env.sh` is now just a thin shim to `scripts/ash.sh`
HTTP proxy path:

```shell
curl -k -I -x http://127.0.0.1:8888 https://api.anthropic.com
```

Gemini path:

```shell
ash gemini -p 'say ok'
```

Expected result:
- terminal prints `ok`
- dashboard shows model-provider traffic
- captured request/response bodies appear under `/tmp/agent-shield-bodies`
Reference materials:
- project architecture diagram
- listener dashboard screenshots
- current runtime flow
- interceptor hook flow
Local promo drafts and publication assets belong under the git-ignored `press/` directory.
Current listeners:
- proxy: `:8888`
- embedded dashboard: `:9999` by default
Current behavior:
- all traffic explicitly sent to this proxy is MITM-inspected by default
- `pass` and `block` lists are still applied as policy exceptions
- request/response bodies are captured for HTTP and websocket/SSE model traffic
- raw normalized events can also be published to NATS/JetStream for external subscribers
- raw events now include inline payloads, so external consumers do not need local body files just to render traffic
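A minimal external subscriber could be sketched with the nats-py client as below, reusing the default subject and stream names from this README. The payload fields in `summarize` are illustrative assumptions, and the `consume` loop needs a reachable NATS server:

```python
import asyncio
import json

def summarize(raw: bytes) -> str:
    """Turn one raw event payload into a one-line summary.
    Field names here are guesses at the schema, for illustration only."""
    event = json.loads(raw)
    return f"{event.get('phase')} {event.get('domain')}"

async def consume(url: str = "nats://127.0.0.1:4222"):
    import nats  # requires nats-py and a reachable server
    nc = await nats.connect(url)
    js = nc.jetstream()
    # Push subscription on the raw event subject published by the proxy.
    sub = await js.subscribe("ash.events.raw", stream="ash_events")
    while True:
        msg = await sub.next_msg(timeout=30)
        print(summarize(msg.data))  # inline payload: no local body files needed
        await msg.ack()

# Offline check of the summarizer with a fabricated payload:
print(summarize(b'{"phase": "http.request", "domain": "api.example.com"}'))
```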
Public CA cert for client trust is stored in `certs/mitmproxy-ca-cert.pem`.
This file is safe to keep in the repo because it is the public certificate only.
The private CA key is not committed. Runtime still uses:
- `~/.mitmproxy/mitmproxy-ca.pem` when present
- otherwise a generated local CA under `/tmp/agent-shield-ca`
Optional NATS/JetStream event bus:

```shell
docker run -d --name agent-shield \
  -p 8888:8888 \
  -p 9999:9999 \
  -e AGENT_SHIELD_NATS_URL=nats://127.0.0.1:4222 \
  -e AGENT_SHIELD_NATS_STREAM=ash_events \
  -e AGENT_SHIELD_NATS_SUBJECT=ash.events.raw \
  -v ~/.mitmproxy:/root/.mitmproxy:ro \
  -v /tmp/agent-shield-bodies:/tmp/agent-shield-bodies \
  agent-shield
```

Relevant env vars:
- `AGENT_SHIELD_PROXY_PORT`
- `AGENT_SHIELD_NATS_URL`
- `AGENT_SHIELD_NATS_STREAM`
- `AGENT_SHIELD_NATS_SUBJECT`
- `AGENT_SHIELD_NATS_QUEUE_CAPACITY`
Disable the embedded dashboard when running a separate subscriber/UI:

```shell
-e AGENT_SHIELD_DISABLE_EMBEDDED_DASHBOARD=1
```

Optional sync decision path over NATS request/reply:

```shell
-e AGENT_SHIELD_DECISION_NATS_URL=nats://127.0.0.1:4222 \
-e AGENT_SHIELD_DECISION_NATS_SUBJECT=ash.hooks.decision \
-e AGENT_SHIELD_DECISION_TIMEOUT_MS=1500
```

Current decision enforcement coverage:
- `connect.pre`: `allow`, `block`
- `http.request`: `allow`, `block`, `modify`, `replace`
- `http.response`: `allow`, `block`, `modify`, `replace`
- `ws.message.out`: `allow`, `block`, `modify`, `replace`
- `ws.message.in`: `allow`, `block`, `modify`, `replace`
- `sse.event.in`: `allow`, `block`, `modify`, `replace`
Streaming behavior notes:
- blocking `ws.message.*` drops the current websocket message and closes the upgraded tunnel
- blocking `sse.event.in` removes the event from the rebuilt SSE stream before the HTTP response is sent back
- modifying `sse.event.in` rewrites the emitted `data:` payload
- modifying `ws.message.*` rewrites the forwarded websocket frame payload
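A decision responder on the NATS request/reply subject could be sketched like this with nats-py. The envelope field names (`action`, `reason`) and the context fields are assumptions, since this README does not spell out the wire format:

```python
import asyncio
import json

def decide(ctx: dict) -> dict:
    """Hypothetical decision envelope: 'action'/'reason' are illustrative,
    not the project's confirmed DecisionEnvelope fields."""
    if ctx.get("phase") == "sse.event.in" and "password" in ctx.get("primary_text", ""):
        return {"action": "block", "reason": "secret_in_stream"}
    return {"action": "allow"}

async def serve(url: str = "nats://127.0.0.1:4222"):
    import nats  # requires nats-py and a reachable server

    nc = await nats.connect(url)

    async def on_request(msg):
        # Reply synchronously so the proxy can enforce the decision inline.
        reply = decide(json.loads(msg.data))
        await msg.respond(json.dumps(reply).encode())

    await nc.subscribe("ash.hooks.decision", cb=on_request)
    await asyncio.Event().wait()  # serve forever

# Offline check of the decision logic:
print(decide({"phase": "sse.event.in", "primary_text": "my password is hunter2"}))
```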
The crate now also builds a standalone dashboard subscriber binary: `ash-dashboard`. It consumes raw interceptor events from NATS/JetStream and serves the same UI without living inside the proxy process.

The default image now contains both runtime binaries:
- `agent-shield`
- `ash-dashboard`
Typical env:

```shell
AGENT_SHIELD_DASHBOARD_NATS_URL=nats://127.0.0.1:4222
AGENT_SHIELD_DASHBOARD_NATS_STREAM=ash_events
AGENT_SHIELD_DASHBOARD_NATS_SUBJECT=ash.events.raw
AGENT_SHIELD_DASHBOARD_CONSUMER=ash_dashboard
AGENT_SHIELD_DASHBOARD_PORT=9999
AGENT_SHIELD_BODY_DIR=/tmp/agent-shield-bodies
```

Recommended split:
- run `agent-shield` with `AGENT_SHIELD_DISABLE_EMBEDDED_DASHBOARD=1`
- run `ash-dashboard` as a separate process/container subscribing to the same NATS stream
Current delivery model for `ash-dashboard`:
- durable JetStream consumer
- explicit ack after event is stored
- event order preserved by JetStream consumer delivery
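That delivery model (durable consumer, store the event, then explicitly ack) could be sketched with nats-py roughly as below. The durable name reuses the `AGENT_SHIELD_DASHBOARD_CONSUMER` default from above, and the in-memory store is a stand-in for the dashboard's real event store:

```python
import asyncio

EVENTS = []  # in-memory stand-in for the dashboard's event store

def store(data: bytes) -> None:
    EVENTS.append(data)

async def run(url: str = "nats://127.0.0.1:4222"):
    import nats  # requires nats-py and a reachable server
    nc = await nats.connect(url)
    js = nc.jetstream()
    # Durable pull consumer: JetStream tracks delivery progress across restarts.
    sub = await js.pull_subscribe("ash.events.raw", durable="ash_dashboard",
                                  stream="ash_events")
    while True:
        for msg in await sub.fetch(batch=10, timeout=30):
            store(msg.data)   # store first...
            await msg.ack()   # ...ack only after the event is persisted

# Offline check of the store step:
store(b'{"phase": "http.request"}')
print(len(EVENTS))
```

Acking only after the store succeeds means a crash between fetch and ack causes redelivery rather than a lost event.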
The crate now also builds a standalone decision service binary: `ash-orchestrator`.
It serves sync NATS request/reply hooks for the Interceptor and returns `DecisionEnvelope` responses.
Current bundled behavior:
- classify telemetry, control-plane, model HTTP, and model WS traffic
- block telemetry by default through the built-in `telemetry_blocker` adapter
- block obvious secrets through the built-in `secret_scanner`
- allow everything else unchanged
Typical env:

```shell
AGENT_SHIELD_ORCHESTRATOR_NATS_URL=nats://127.0.0.1:4222
AGENT_SHIELD_ORCHESTRATOR_NATS_SUBJECT=ash.hooks.decision
```

Optional REST callbacks:

```shell
AGENT_SHIELD_ORCHESTRATOR_LISTENER_URL=http://127.0.0.1:18081/listener
AGENT_SHIELD_ORCHESTRATOR_LISTENER_URLS=http://127.0.0.1:18081/listener,http://127.0.0.1:18083/listener
AGENT_SHIELD_ORCHESTRATOR_LISTENER_PHASES=http.request,http.response,ws.message.out,ws.message.in,sse.event.in
AGENT_SHIELD_ORCHESTRATOR_HANDLER_URL=http://127.0.0.1:18082/handler
AGENT_SHIELD_ORCHESTRATOR_HANDLER_PHASES=http.request,ws.message.out,sse.event.in
AGENT_SHIELD_ORCHESTRATOR_LISTENER_TIMEOUT_MS=500
AGENT_SHIELD_ORCHESTRATOR_HANDLER_TIMEOUT_MS=1500
```

Current callback behavior:
- listener is best-effort and does not block the final response
- handler is called with a timeout and returns `allow|block|modify|replace|route`
- on handler timeout or callback failure, the Orchestrator falls back to `allow`
- listeners and handler receive the same versioned `HandlerContext` JSON payload
Current built-in scanner behavior:
- blocks obvious outbound and inbound secrets in `http.request`, `http.response`, `ws.message.out`, `ws.message.in`, and `sse.event.in`
- skips telemetry and control-plane traffic
- returns `403` with reasons like `secret_detected:openai_key`
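As a rough idea of what such a scanner does: the sketch below pattern-matches well-known key prefixes and emits reasons in the `secret_detected:<name>` format. The actual `secret_scanner` rules are not documented here, so these patterns are illustrative:

```python
import re

# Illustrative patterns only; the built-in secret_scanner's real rule set
# is not documented in this README.
SECRET_PATTERNS = {
    "openai_key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def scan(text: str) -> list:
    """Return block reasons for every pattern found in the payload text."""
    return [f"secret_detected:{name}"
            for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

print(scan("Authorization: Bearer sk-" + "a" * 24))
```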
Demo FastAPI callbacks live under `examples/fastapi`.

Quick demo startup:

```shell
cd examples/fastapi
python3 -m venv .venv
. .venv/bin/activate
pip install -r requirements.txt
uvicorn listener:app --host 127.0.0.1 --port 18081
uvicorn handler:app --host 127.0.0.1 --port 18082
```

The demo handler logs the incoming callback and appends `hello` to `primary_text` for outgoing request/message phases.
HandlerContext currently includes:
- `schema_version`
- `event_id`, `session_id`, `event_seq`, `session_seq`
- `phase`, `transport`, `direction`
- `method`, `url`, `domain`, `status`, `action`
- `content_type`, `traffic_class`
- `req_headers`, `resp_headers`
- `req_body`, `resp_body`
- `primary_text`, `preview`
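Put together, a HandlerContext payload might look like the following. Only the field names come from the list above; every value here is fabricated for illustration:

```python
import json

# Example HandlerContext shape; values are fabricated for illustration.
ctx = {
    "schema_version": 1,
    "event_id": "evt-123",
    "session_id": "sess-1",
    "event_seq": 42,
    "session_seq": 7,
    "phase": "http.request",
    "transport": "http",
    "direction": "out",
    "method": "POST",
    "url": "https://api.anthropic.com/v1/messages",
    "domain": "api.anthropic.com",
    "status": None,
    "action": "allow",
    "content_type": "application/json",
    "traffic_class": "model_http",
    "req_headers": {"content-type": "application/json"},
    "resp_headers": None,
    "req_body": '{"messages": []}',
    "resp_body": None,
    "primary_text": "say ok",
    "preview": "say ok",
}
print(json.dumps(ctx, indent=2)[:60])
```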
A new external dashboard app now lives in apps/dashboard.
Stack:
- Vite
- React + TypeScript
- Tailwind CSS
- TanStack Query
- TanStack Table

It consumes the existing Interceptor API:
- `/api/traffic`
- `/api/events`
- `/api/stats`
- `/api/body/{name}`
Run it locally:

```shell
cd apps/dashboard
npm install
npm run dev
```

By default the Vite dev server proxies `/api/*` to http://127.0.0.1:9999. Override the backend target if needed:

```shell
AGENT_SHIELD_DASHBOARD_API_TARGET=http://127.0.0.1:9999 npm run dev
```

Production build:

```shell
cd apps/dashboard
npm run build
```

Gemini CLI uses a Node.js auth/runtime stack. The proxy was already working, but Node did not trust the MITM certificate by default, so TLS failed with certificate validation errors and the proxy only saw handshake EOFs.
The fix was not a Gemini source patch; the fix is to make the runtime trust the proxy CA:

```shell
NODE_EXTRA_CA_CERTS=$PWD/certs/mitmproxy-ca-cert.pem
HTTPS_PROXY=http://127.0.0.1:8888
```

The wrapper script sets these automatically.
Decision-path smoke notes:
- running `ash-orchestrator` on `ash.hooks.decision` should allow normal model traffic and block telemetry such as `play.googleapis.com /log?format=json&hasfast=true` with `403`
- blocking `sse.event.in` should leave the Gemini request without the final streamed text and increment `total_blocked`
- blocking `ws.message.in` should force Codex reconnects and log `decision_action=block` on the websocket event

