Obex is a lightweight, statically-compiled sidecar proxy written in Go that makes any MCP server MCP-OAuth-compliant with zero backend changes.
- Phase 1 — Protocol Bridge: Proxies requests, serves PRM discovery, returns MCP-compliant 401 challenges
- Phase 2 — JWT Validation: Cryptographic token validation, JWKS cache, multi-issuer support, scope enforcement
- Phase 3 — DPoP Sender Constraining: RFC 9449 proof-of-possession, JTI replay cache (memory + Redis)
All three phases are implemented. Which ones are active is purely a config choice —
comment or uncomment blocks in your obex.yaml.
| Phase | What Obex checks | What you need |
|---|---|---|
| 1 | That a Bearer token header is present | Nothing — no external dependency |
| 2 | That the token is cryptographically valid, not expired, has correct scopes | An IdP |
| 3 | That the token is bound to the specific client making the request (DPoP) | An IdP + a DPoP-capable client |
Start with Phase 1 to verify connectivity, add Phase 2 when you're ready to lock it down.
An Identity Provider (IdP) is a service that issues and signs JWT tokens. Think of it as a passport office:
- IdP — issues and cryptographically signs tokens (passports)
- JWT token — a signed, expiring proof of identity presented on each request
- Obex — validates the token's signature against the IdP's public keys, checks expiry and scopes
- MCP server — never sees unauthenticated traffic; receives trusted `X-Obex-*` headers instead
The client (e.g. Claude Code on a remote machine) first obtains a token from the IdP, then presents it to Obex on every MCP request. Obex fetches the IdP's public keys (JWKS endpoint) once and caches them — no round-trip to the IdP per request.
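The token's claims can be inspected locally without any IdP round-trip — a minimal stdlib-only sketch that decodes (but does not verify) a JWT payload. The token built here is a throwaway for demonstration; real validation must check the signature against the JWKS, which is exactly what Obex does:

```python
import base64, json

def decode_claims(jwt: str) -> dict:
    """Decode a JWT's payload WITHOUT verifying the signature -- inspection only."""
    payload_b64 = jwt.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64url padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a throwaway token for demonstration (header.payload.signature)
header = {"alg": "RS256", "typ": "JWT"}
claims = {"iss": "https://auth.example.com/", "aud": "https://my-tool.example.com/mcp",
          "exp": 1700000000, "scope": "mcp:read"}
b64 = lambda d: base64.urlsafe_b64encode(json.dumps(d).encode()).rstrip(b"=").decode()
token = f"{b64(header)}.{b64(claims)}.signature"
print(decode_claims(token)["scope"])  # -> mcp:read
```

Decoding without verification is fine for debugging `iss`/`aud`/`exp` mismatches; it is never a substitute for signature validation.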
Obex works with any OIDC-compliant IdP — it only needs a JWKS endpoint to fetch public keys and a standard JWT with the right claims. There is no vendor lock-in.
| Option | Hosting | Best for |
|---|---|---|
| Keycloak | Self-hosted, runs in Docker | Full control, on-prem, team use |
| Authentik | Self-hosted, runs in Docker | Modern UI, easier setup than Keycloak |
| Auth0 | Cloud, free tier | Quickest to get started, no infra |
| Okta | Cloud, free developer tier | Full OAuth 2.1 + DPoP support |
| Any OIDC-compliant provider | — | Works as long as it issues standard JWTs |
Note on Auth0 and Okta in this documentation: Auth0 and Okta were used during development and testing because their free tiers happen to cover the exact features needed — Auth0 for Phase 2 JWT validation (RFC 9068 JWT profile, scopes, M2M tokens) and Okta for Phase 3 DPoP (the Integrator Free tier supports DPoP-bound tokens out of the box). They are not required or recommended over any other provider. The step-by-step guides below are included because they were tested and verified; equivalent steps exist for Keycloak, Authentik, and any other standards-compliant IdP.
From whichever IdP you choose, you need two URLs for the Obex config:
- `issuer` — the IdP's base URL (e.g. `https://auth.example.com/realms/myrealm/`)
- `jwks_uri` — where Obex fetches public keys (e.g. `.../protocol/openid-connect/certs`)
Auth0 is the fastest way to get Phase 2 working — no infrastructure to run. The free tier allows 1,000 M2M tokens/month, sufficient for testing and small teams.
1. Sign up at auth0.com — creates your tenant (e.g. acme.us.auth0.com).
2. Create an API (Left sidebar → Applications → APIs → Create API)
- Name: anything descriptive, e.g. `MCP Servers`
- Identifier: `https://your-docker-host-ip/mcp` — this is just a unique string (the `audience` claim in tokens), not required to be reachable. Must be `https://` — Obex validates `resource.url` against it and requires https. Use a path without a port so one identifier covers all Obex instances.
- JWT Profile: select `RFC 9068` (standard format — not the proprietary Auth0 format)
- Signing Algorithm: RS256
- Access Settings: both dropdowns → `Allow via client-grant`
- Token Sender-Constraining / RBAC / JSON Web Encryption: leave all off
- After creating, go to the Permissions tab → Add permission:
  - Permission: `mcp:read`
  - Description: `Read access to MCP servers`
3. Create a Machine-to-Machine Application (Applications → Applications → Create Application)
- Type: Machine to Machine
- Name: e.g. `obex-client`
- Auth0 shows a dropdown to authorize it against an API — select your `MCP Servers` API
- You will see a list of permissions — do not check anything here yet, just confirm
- From the Settings tab, note your `client_id` and `client_secret`
5. Grant the scope to the application. This step is easy to miss — Auth0 requires you to explicitly grant scopes to each M2M client:
- Applications → Applications → your app → APIs tab
- Find `MCP Servers` in the list → expand the row
- Check `mcp:read` → click Update

Without this step, Auth0 returns `access_denied: Client has not been granted scopes: mcp:read` even if the scope exists on the API.
5. Configure Obex — set in obex/mariadb.yaml and obex/clickhouse.yaml:
idp:
  issuer: "https://acme.us.auth0.com/"
  jwks_uri: "https://acme.us.auth0.com/.well-known/jwks.json"
  clock_skew: "10s"
  accepted_algorithms: [RS256]
authz:
  required_scopes: [mcp:read]
resource:
  url: "https://your-docker-host-ip/mcp"   # must match Auth0 API Identifier exactly

Restart Obex containers after editing. Verify `jwks: ok` in the `/ready` response.
6. Test token issuance:
curl -s -X POST https://acme.us.auth0.com/oauth/token \
-H "Content-Type: application/json" \
-d '{
"grant_type": "client_credentials",
"client_id": "YOUR_CLIENT_ID",
"client_secret": "YOUR_CLIENT_SECRET",
"audience": "https://your-docker-host-ip/mcp",
"scope": "mcp:read"
}'

You get back a JWT. Inspect it at jwt.io — verify `aud` matches your `resource.url` and `scope` contains `mcp:read`.
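The same checks can be done locally instead of pasting tokens into jwt.io — a sketch operating on an already-decoded claims dict (the function name `check_token` and its return shape are illustrative, not part of Obex):

```python
def check_token(claims: dict, audience: str, required: set) -> list:
    """Return a list of problems (empty list = claims look right)."""
    problems = []
    aud = claims.get("aud")
    # "aud" may be a string or a list per RFC 7519
    if audience not in (aud if isinstance(aud, list) else [aud]):
        problems.append(f"aud mismatch: {aud!r}")
    granted = set(claims.get("scope", "").split())  # scope is space-delimited
    missing = required - granted
    if missing:
        problems.append(f"missing scopes: {sorted(missing)}")
    return problems

claims = {"aud": "https://your-docker-host-ip/mcp", "scope": "mcp:read"}
print(check_token(claims, "https://your-docker-host-ip/mcp", {"mcp:read"}))  # -> []
```

An empty list means the `aud` and `scope` claims line up with what Obex will enforce.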
7. Test against Obex:
TOKEN=$(curl -s -X POST https://acme.us.auth0.com/oauth/token \
-H "Content-Type: application/json" \
-d '{
"grant_type": "client_credentials",
"client_id": "YOUR_CLIENT_ID",
"client_secret": "YOUR_CLIENT_SECRET",
"audience": "https://your-docker-host-ip/mcp",
"scope": "mcp:read"
}' | jq -r .access_token)
curl -ik -H "Authorization: Bearer $TOKEN" https://your-docker-host-ip:8443/mcp
# expect: 405 from the MCP server (valid JWT accepted by Obex, MCP rejects plain GET — correct)

8. Use the token in `settings.local.json` on the remote machine:
"headers": { "Authorization": "Bearer <jwt-from-step-6>" }Auth0 gotchas:
- The API Identifier must be `https://` — Obex requires `resource.url` to be https and validates `aud` against it exactly. Using `http://` or including a port number creates a mismatch.
- You cannot change an API Identifier after creation — delete and recreate the API if you get it wrong.
- Auth0 auto-creates a test application when you create an API — ignore it, use your own M2M app.
- Scope must be explicitly granted to the client (step 4) — creating the scope on the API alone is not enough.
- Always include `"scope": "mcp:read"` in the token request — Auth0 M2M does not include scopes by default.
- One M2M Application covers all agents — they all share the same `client_id`/secret.
- Tokens expire after 24 h by default; clients should cache and refresh on expiry.
- Free tier limit is 1,000 M2M tokens/month — for production with many agents, use a self-hosted IdP (Authentik, Keycloak) which has no token limits.
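The cache-and-refresh pattern from the gotchas above can be sketched in a few lines. `fake_fetch` stands in for the real `/oauth/token` call (this class is illustrative, not part of Obex or its test suite):

```python
import time

class TokenCache:
    """Cache an access token and refresh it shortly before expiry.
    `fetch` is any callable returning (token, expires_in_seconds)."""
    def __init__(self, fetch, skew=60):
        self._fetch, self._skew = fetch, skew
        self._token, self._expires_at = None, 0.0

    def token(self) -> str:
        # Refresh `skew` seconds early so we never present an expired token
        if time.time() >= self._expires_at - self._skew:
            self._token, ttl = self._fetch()
            self._expires_at = time.time() + ttl
        return self._token

calls = []
def fake_fetch():
    calls.append(1)
    return f"jwt-{len(calls)}", 86400  # 24 h, matching Auth0's default

cache = TokenCache(fake_fetch)
assert cache.token() == cache.token() == "jwt-1"  # second call served from cache
```

Refreshing slightly before expiry also keeps each agent well under the monthly token quota: one token per day instead of one per request.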
The examples below use MariaDB and ClickHouse MCP servers — the two servers included in this repo as working examples. They are not special or required by Obex. Any MCP server that supports Streamable HTTP transport works the same way. See `mcp/README.md` for how to add your own.
Once Obex is up and a valid JWT has been obtained (see Auth0 setup above), you can make actual MCP tool calls through the proxy.
All Streamable HTTP MCP requests must include:
Accept: application/json, text/event-stream
Requests without this header receive a 406 Not Acceptable from the MCP server.
TOKEN=$(curl -s -X POST https://your-tenant.us.auth0.com/oauth/token \
-H "Content-Type: application/json" \
-d '{"grant_type":"client_credentials","client_id":"CLIENT_ID",
"client_secret":"CLIENT_SECRET","audience":"https://SERVER_IP/mcp",
"scope":"mcp:read"}' | jq -r .access_token)
curl -sk -X POST https://SERVER_IP:8443/mcp \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-H "Accept: application/json, text/event-stream" \
-d '{"jsonrpc":"2.0","id":1,"method":"tools/call",
"params":{"name":"mysql_query","arguments":{"sql":"SHOW TABLES"}}}'@benborla29/mcp-server-mysql exposes a single tool: mysql_query. Pass any
read-only SQL as the sql argument (INSERT/UPDATE/DELETE are disabled server-side).
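The read-only restriction is enforced server-side by the MCP server itself; a naive first-keyword check like the sketch below conveys the idea but is not the actual implementation (and, unlike a real parser, can be fooled by CTEs or multi-statement input):

```python
# Keywords whose statements mutate data -- illustrative, not exhaustive
WRITE_KEYWORDS = {"insert", "update", "delete", "drop", "alter", "truncate"}

def is_read_only(sql: str) -> bool:
    """Crude check: reject statements whose first keyword mutates data."""
    stripped = sql.strip()
    if not stripped:
        return False
    first = stripped.split(None, 1)[0].lower()
    return first not in WRITE_KEYWORDS

assert is_read_only("SHOW TABLES")
assert not is_read_only("  DELETE FROM users")
```

Treat this as a mental model for why `INSERT`/`UPDATE`/`DELETE` calls fail, not as a security boundary.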
mcp-clickhouse with Streamable HTTP transport requires a session handshake.
The server returns an mcp-session-id header on the initialize response; include it
in all subsequent requests.
# Step 1 — initialize, capture session ID
SESSION=$(curl -sk -X POST https://SERVER_IP:8444/mcp \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-H "Accept: application/json, text/event-stream" \
-d '{"jsonrpc":"2.0","id":1,"method":"initialize",
"params":{"protocolVersion":"2024-11-05","capabilities":{},
"clientInfo":{"name":"test","version":"1.0"}}}' \
-D - 2>&1 | grep -i "mcp-session-id" | awk '{print $2}' | tr -d '\r')
# Step 2 — list tables
curl -sk -X POST https://SERVER_IP:8444/mcp \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-H "Accept: application/json, text/event-stream" \
-H "mcp-session-id: $SESSION" \
-d '{"jsonrpc":"2.0","id":2,"method":"tools/call",
"params":{"name":"list_tables",
"arguments":{"database":"YOUR_DB","include_detailed_columns":false}}}'mcp-clickhouse exposes three tools: list_databases, list_tables, run_select_query.
Transport: Use CLICKHOUSE_MCP_SERVER_TRANSPORT=http (Streamable HTTP). Do not
use sse — SSE streams do not flow reliably through Go's reverse proxy (auth passes but
the client receives no data).
Port: clickhouse-connect (the library mcp-clickhouse uses) talks to ClickHouse over
its HTTP interface:
- Port `8123` — plain HTTP
- Port `8443` — HTTPS
Port 9000 is the native binary protocol and will not work. Set CLICKHOUSE_SECURE=false
when using port 8123. Example .env:
CLICKHOUSE_PORT=8123
CLICKHOUSE_SECURE=false
Once Obex is running, you can connect Claude Code CLI directly to your MCP server.
Add entries to ~/.claude.json under mcpServers. Claude Code uses "type": "http"
(not "streamable-http") in this file:
"mcpServers": {
"my-mariadb": {
"type": "http",
"url": "https://SERVER_IP:8443/mcp",
"headers": {
"Authorization": "Bearer phase1-token"
}
},
"my-clickhouse": {
"type": "http",
"url": "https://SERVER_IP:8444/mcp",
"headers": {
"Authorization": "Bearer YOUR_JWT"
}
}
}my-mariadb is running Phase 1 — Obex requires a Bearer token to be present but does
not validate it. Any non-empty string works.
my-clickhouse is running Phase 2 — a real JWT is required. Obtain one first
(see Auth0 setup) and paste it as the Bearer
value. Tokens expire after 24 h — update the header when they do.
Claude Code runs on Node.js, which rejects self-signed certificates by default. Set this environment variable before starting Claude Code:
export NODE_EXTRA_CA_CERTS=/path/to/obex_stack/certs/server.crt

Add it to `~/.bashrc` to make it permanent. With `TLS_MODE=provided` or `TLS_MODE=external`
(a CA-signed cert), this step is not needed.
Start Claude Code, then run /mcp — your servers should appear as connected. You can
then ask Claude to query the MCP servers directly in the chat.
docker build -t obex:dev .

The resulting image is < 15 MB (multi-stage build with distroless base, UPX-compressed
binary). It runs as a non-root user (UID 65532, distroless nonroot).
TLS mode is controlled by a single variable in your .env file.
Choose the mode that matches your environment:
| `TLS_MODE` | Who it's for | What you need to do |
|---|---|---|
| `self-signed` | Home networks, LAN, local testing | Nothing — cert is generated automatically |
| `provided` | Enterprise / internal CA | Place your cert files in `certs/` |
| `external` | Reverse proxy (Nginx, Caddy, Traefik) in front of Obex | Configure your proxy; set `OBEX_TLS_ENABLED=false` |
TLS_MODE=self-signed
OBEX_TLS_ENABLED=true

Nothing else required. When the stack starts, a self-signed certificate is
generated automatically for SERVER_IP. The cert is valid for 10 years and is
regenerated automatically if less than 30 days remain on the current cert.
Clients must trust this cert or use -k / --insecure (acceptable on a home LAN
where you control both ends). Claude Code and similar MCP clients let you add
the cert to the system trust store once and then connect normally.
TLS_MODE=provided
OBEX_TLS_ENABLED=true

Place your certificate files in the `certs/` directory before starting:
certs/server.crt ← certificate (or full chain with intermediates)
certs/server.key ← private key
At startup the script validates the cert and behaves as follows:
| Cert state | What happens |
|---|---|
| Files missing | Hard fail (exit 1) — Obex containers do not start |
| Already expired | Hard fail (exit 1) — Obex containers do not start |
| Expiring within 30 days | Warning printed, stack starts — renew soon |
| Valid | Provided certificate OK. — stack starts normally |
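The decision logic in the table can be summarized as a small function (illustrative only; the actual check lives in the shell-based cert init script):

```python
from datetime import datetime, timedelta

def cert_check(not_after: datetime, now: datetime) -> str:
    """Mirror the startup behavior table: fail on expired, warn under 30 days."""
    if now >= not_after:
        return "fail"   # exit 1 -- Obex containers do not start
    if not_after - now < timedelta(days=30):
        return "warn"   # stack starts, renewal warning printed
    return "ok"         # "Provided certificate OK." -- normal start

now = datetime(2025, 1, 1)
assert cert_check(now - timedelta(days=1), now) == "fail"
assert cert_check(now + timedelta(days=10), now) == "warn"
assert cert_check(now + timedelta(days=365), now) == "ok"
```

Note that the missing-files case hard-fails before any expiry check is reached.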
Rotation: replace the files and docker compose restart obex-mariadb obex-clickhouse.
TLS_MODE=external
OBEX_TLS_ENABLED=false   # required — Obex runs plain HTTP; the proxy does TLS

Your reverse proxy (Nginx, Caddy, Traefik, etc.) terminates TLS and forwards
plain HTTP to Obex. No cert files needed in certs/. This is the standard
enterprise deployment pattern when you already have a TLS-terminating ingress.
All three modes have been verified. The test procedures are below.
# .env: TLS_MODE=self-signed, OBEX_TLS_ENABLED=true (defaults)
docker compose up -d
curl -sk https://SERVER_IP:8443/health # {"status":"ok"}
curl -sk https://SERVER_IP:8444/health   # {"status":"ok"}

The cert init script runs all validation before Obex starts. Test each case:
# Happy path — valid cert
# .env: TLS_MODE=provided, OBEX_TLS_ENABLED=true
# certs/server.crt and server.key present and valid
docker compose up -d --force-recreate obex-mariadb obex-clickhouse
# certs init prints: "Provided certificate OK."
curl -sk https://SERVER_IP:8443/health # {"status":"ok"}
# Error path — missing files
# Remove or rename certs/server.crt and certs/server.key, then:
docker compose up -d --force-recreate obex-mariadb obex-clickhouse
# certs init prints: "ERROR: TLS_MODE=provided but certificate files not found."
# exit 1 — Obex containers do not start
# Error path — already expired cert
# Place an expired cert at certs/server.crt, then:
docker compose up -d --force-recreate obex-mariadb obex-clickhouse
# certs init prints: "ERROR: Provided certificate has already expired."
# exit 1 — Obex containers do not start
# Warning path — cert expiring within 30 days
# Place a cert with <30 days remaining at certs/server.crt, then:
docker compose up -d --force-recreate obex-mariadb obex-clickhouse
# certs init prints: "WARNING: Provided certificate expires within 30 days."
# exit 0 — Obex starts, warning visible in: docker compose logs certs

To generate a short-lived test cert (expiring in 10 days) to exercise the warning path:
openssl req -x509 -newkey rsa:2048 -nodes \
-keyout certs/server.key -out certs/server.crt \
-days 10 -subj "/CN=SERVER_IP" -addext "subjectAltName=IP:SERVER_IP"

A standalone Nginx test harness lives in `test/external-tls/`. It joins the main
Docker network and adds an HTTPS layer in front of the plain-HTTP Obex containers.
# Step 1 — certs must exist (run self-signed mode once first, or place your own)
# Step 2 — switch to external mode
# .env: TLS_MODE=external, OBEX_TLS_ENABLED=false
docker compose up -d --force-recreate obex-mariadb obex-clickhouse
# Step 3 — start Nginx
docker compose -f test/external-tls/docker-compose.yml up -d
# Step 4 — verify Obex speaks plain HTTP (external mode active)
curl -i http://SERVER_IP:8443/health # HTTP/1.1 200, no TLS
curl -i http://SERVER_IP:8444/health
# Step 5 — verify Nginx speaks HTTPS and proxies through
curl -ik https://SERVER_IP:9443/health # HTTP/1.1 200, via Nginx TLS
curl -ik https://SERVER_IP:9444/health
# Step 6 — full MCP tool call through Nginx
curl -sk -X POST https://SERVER_IP:9443/mcp \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-H "Accept: application/json, text/event-stream" \
-d '{"jsonrpc":"2.0","id":1,"method":"tools/call",
"params":{"name":"mysql_query","arguments":{"sql":"SHOW TABLES"}}}'
# Tear down
docker compose -f test/external-tls/docker-compose.yml down
# Restore .env: TLS_MODE=self-signed, OBEX_TLS_ENABLED=true
docker compose up -d --force-recreate obex-mariadb obex-clickhouse

docker run --rm \
-v ./obex.yaml:/etc/obex/obex.yaml:ro \
-p 8443:8443 \
obex:dev

make build    # compile
make test # run tests
make docker # build Docker image
make fmt # format all Go files
make vet # run go vet
make lint # run golangci-lint
# Run directly against a specific config (skips Docker Compose):
make run CONFIG=obex/mariadb.yaml

obex:
  public_url: "https://my-tool.example.com"
  listen: ":8443"
  tls:
    enabled: false  # set to true for production
backend:
  url: "http://localhost:9090"
resource:
  url: "https://my-tool.example.com/mcp"

Phase 1 provides no authentication. Obex only checks that an `Authorization: Bearer` header exists — the token value is never verified. Any string passes. Use Phase 1 to confirm connectivity, then add an `idp:` block to enable real JWT validation (Phase 2) before exposing to untrusted networks.
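The Phase 1 check amounts to nothing more than "a non-empty Bearer value is present". A sketch of that behavior (illustrative Python, not Obex's Go code):

```python
def phase1_check(headers: dict) -> bool:
    """Phase 1: require a non-empty Bearer token -- the value is never validated."""
    auth = headers.get("Authorization", "")
    scheme, _, token = auth.partition(" ")
    return scheme.lower() == "bearer" and token.strip() != ""

assert phase1_check({"Authorization": "Bearer anything-at-all"})  # any string passes
assert not phase1_check({})                                       # no header -> 401
assert not phase1_check({"Authorization": "Bearer "})             # empty token -> 401
```

This is why Phase 1 is suitable only for connectivity testing on trusted networks.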
obex:
  public_url: "https://my-tool.example.com"
  listen: ":8443"
  tls:
    enabled: false
backend:
  url: "http://localhost:9090"
resource:
  url: "https://my-tool.example.com/mcp"
  scopes_supported: [mcp:read, mcp:write]
idp:
  issuer: "https://auth.your-company.com/"
  jwks_uri: "https://auth.your-company.com/.well-known/jwks.json"
  clock_skew: "10s"
  accepted_algorithms: [RS256, ES256]
authz:
  required_scopes: [mcp:read]

Add to the Phase 2 config:
dpop:
  enabled: true
  required: false          # true = reject requests without DPoP header
  proof_lifetime: "60s"
  replay_cache:
    backend: "memory"      # or "redis" for multi-replica
    max_entries: 12000

| Endpoint | Auth | Description |
|---|---|---|
| `GET /health` | No | Liveness probe. Always returns `{"status":"ok"}` |
| `GET /ready` | No | Readiness probe. Checks backend, JWKS (Phase 2+), Redis (Phase 3) |
| `GET /.well-known/oauth-protected-resource` | No | Protected Resource Metadata (PRM) document |
| `GET /metrics` | No | Prometheus metrics |
| `ANY /*` | Yes | All other paths are reverse-proxied after auth |
GET /health → 200 {"status":"ok"}
Phase 1: {"status":"ready","checks":{"backend":"ok"}}
Phase 2: {"status":"ready","checks":{"backend":"ok","jwks":"ok"}}
Phase 3+Redis: {"status":"ready","checks":{"backend":"ok","jwks":"ok","replay_cache":"ok"}}
GET /.well-known/oauth-protected-resource → 200
Cache-Control: public, max-age=3600
{
"resource": "https://my-tool.example.com/mcp",
"authorization_servers": ["https://auth.your-company.com/"],
"scopes_supported": ["mcp:read", "mcp:write"],
"bearer_methods_supported": ["header"],
"dpop_signing_alg_values_supported": ["RS256", "ES256"],
"obex_extensions": {"elicitation_supported": false}
}
GET /any-path (no token)
→ 401 Unauthorized
WWW-Authenticate: Bearer realm="mcp", resource_metadata="https://my-tool.example.com/.well-known/oauth-protected-resource"
{"error":"unauthorized","reason":"no_token"}
After successful JWT validation, Obex injects these headers:
| Header | Content |
|---|---|
| `X-Obex-User` | `sub` claim from token |
| `X-Obex-Scopes` | Normalized scope set (sorted, deduped) |
| `X-Obex-Issuer` | `iss` claim |
| `X-Obex-Client` | `client_id` claim |
| `X-Obex-Request-ID` | UUID for request correlation |
| `X-Obex-DPoP-Bound` | `"true"` if DPoP validated |
Headers Authorization and DPoP are always stripped before forwarding.
All inbound X-Obex-* headers are stripped to prevent injection.
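The forwarding rules above can be sketched as a single sanitization step (illustrative Python, not the Go implementation; header names match the table):

```python
def sanitize(inbound: dict, identity: dict) -> dict:
    """Drop client auth material and any spoofed X-Obex-* headers,
    then inject the trusted identity headers derived from the validated token."""
    out = {k: v for k, v in inbound.items()
           if k.lower() not in ("authorization", "dpop")
           and not k.lower().startswith("x-obex-")}
    out.update(identity)
    return out

fwd = sanitize(
    # inbound request: real token, a spoofing attempt, and a benign header
    {"Authorization": "Bearer jwt", "X-Obex-User": "spoofed", "Accept": "application/json"},
    # identity headers Obex derived from the validated JWT
    {"X-Obex-User": "alice", "X-Obex-Scopes": "mcp:read"},
)
assert "Authorization" not in fwd and fwd["X-Obex-User"] == "alice"
```

Stripping before injecting is what makes the `X-Obex-*` headers trustworthy to the backend.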
Some MCP servers require the proxy to authenticate itself to the backend (e.g. @benborla29/mcp-server-mysql with `IS_REMOTE_MCP=true` requires an `Authorization: Bearer <secret>` header on every request). Use `backend.headers` in the Obex config — any format is supported:
backend:
  url: "http://my-mcp-server:3000"
  headers:
    Authorization: "Bearer ${MY_SECRET}"   # Bearer token
    # X-API-Key: "${SOME_KEY}"             # or an API key header
    # X-Custom-Token: "literal-value"      # or a static value

All headers are injected after the client's Authorization header is stripped,
so there is no risk of the client's token reaching the backend.
This walkthrough shows how to place Obex in front of an existing Dockerized backend running on a local network, configure an IdP, wire everything together with docker-compose, and verify the setup end-to-end.
- Docker and Docker Compose installed
- An existing backend container (we'll use `my-backend:latest` as an example)
- An OAuth 2.0 Identity Provider (IdP) — e.g., Keycloak, Auth0, or any OIDC-compliant provider
Register a new OAuth 2.0 client in your IdP:
- Create a client (e.g., `mcp-client`) with grant type `authorization_code` or `client_credentials`.
- Set the audience/resource to match your Obex `resource.url` (e.g., `https://my-tool.example.com/mcp`).
- Define scopes that your backend needs (e.g., `mcp:read`, `mcp:write`).
- Note the issuer URL (e.g., `https://idp.example.com/realms/my-realm/`) and JWKS URI (e.g., `https://idp.example.com/realms/my-realm/protocol/openid-connect/certs`).
Create obex.yaml in your project directory:
obex:
  public_url: "https://my-tool.example.com"
  listen: ":8443"
  tls:
    enabled: false  # terminate TLS at load-balancer or set true + provide certs
backend:
  url: "http://backend:9090"  # Docker Compose service name
  timeout: "30s"
  max_idle_conns: 10
limits:
  max_request_body: "4MB"
  read_timeout: "10s"
  write_timeout: "30s"
  idle_timeout: "120s"
  drain_timeout: "5s"
rate_limit:
  requests_per_second: 100
  burst: 200
resource:
  url: "https://my-tool.example.com/mcp"
  scopes_supported:
    - mcp:read
    - mcp:write
idp:
  issuer: "https://idp.example.com/realms/my-realm/"
  jwks_uri: "https://idp.example.com/realms/my-realm/protocol/openid-connect/certs"
  clock_skew: "10s"
  accepted_algorithms:
    - RS256
    - ES256
authz:
  required_scopes:
    - mcp:read

Create `docker-compose.yml` alongside it:

version: "3.8"
services:
  obex:
    image: obex:dev
    ports:
      - "8443:8443"
    volumes:
      - ./obex.yaml:/etc/obex/obex.yaml:ro
    depends_on:
      - backend
    healthcheck:
      test: ["/obex", "--health-probe"]
      interval: 15s
      timeout: 5s
      retries: 3
  backend:
    image: my-backend:latest
    expose:
      - "9090"
    # Your backend's existing configuration:
    # environment:
    #   - DATABASE_URL=...

Key points:
- `obex` is the only service that exposes a port to the host network.
- `backend` uses `expose` (not `ports`) so it is only reachable within the Docker network.
- Obex reaches the backend via the Compose service name `backend` on port `9090`.
# Build the Obex image (if not already built)
docker build -t obex:dev .
# Start the stack
docker compose up -d

Check health and readiness:
# Liveness — should return {"status":"ok"}
curl http://localhost:8443/health
# Readiness — should return {"status":"ready","checks":{"backend":"ok","jwks":"ok"}}
curl http://localhost:8443/ready

Check PRM discovery:
curl http://localhost:8443/.well-known/oauth-protected-resource
# Returns the Protected Resource Metadata JSON with your configured
# resource URL, authorization servers, and supported scopes.Verify auth enforcement (unauthenticated request):
curl -i http://localhost:8443/mcp/tools/call
# Should return 401 with WWW-Authenticate: Bearer realm="mcp", resource_metadata="..."

Verify with a valid token:
# Obtain a token from your IdP (example using client_credentials):
TOKEN=$(curl -s -X POST https://idp.example.com/realms/my-realm/protocol/openid-connect/token \
-d grant_type=client_credentials \
-d client_id=mcp-client \
-d client_secret=YOUR_SECRET \
-d scope="mcp:read" | jq -r .access_token)
# Call the protected endpoint:
curl -i -H "Authorization: Bearer $TOKEN" http://localhost:8443/mcp/tools/call
# Should return 200 with the backend's response.
# The backend receives X-Obex-User, X-Obex-Scopes, X-Obex-Issuer headers.

Verify body size limit:
# Oversized request should be rejected with 413:
dd if=/dev/zero bs=5M count=1 2>/dev/null | \
curl -i -X POST -H "Authorization: Bearer $TOKEN" \
--data-binary @- http://localhost:8443/mcp/tools/call
# Should return 413 Request Entity Too Large

Add to `obex.yaml`:
dpop:
  enabled: true
  required: false        # set to true to mandate DPoP on every request
  proof_lifetime: "60s"
  replay_cache:
    backend: "memory"    # or "redis" for multi-replica deployments
    max_entries: 12000

Restart: `docker compose restart obex`
When required: true, clients must include a DPoP header with a valid proof.
Verify with /ready — the response will include replay cache status when using Redis.
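The in-memory replay cache behaves like a bounded, expiring set of seen `jti` values. A sketch of that semantics (illustrative Python; the real cache is in Obex's Go code, with Redis as the multi-replica backend):

```python
import time
from collections import OrderedDict

class ReplayCache:
    """JTI replay cache sketch: each proof's jti is accepted once within the
    proof lifetime; the oldest entry is evicted once max_entries is reached."""
    def __init__(self, lifetime=60.0, max_entries=12000):
        self._seen = OrderedDict()          # jti -> expiry timestamp
        self._lifetime, self._max = lifetime, max_entries

    def check_and_store(self, jti: str, now=None) -> bool:
        now = time.time() if now is None else now
        # Purge expired entries (insertion order == expiry order, fixed lifetime)
        while self._seen and next(iter(self._seen.values())) <= now:
            self._seen.popitem(last=False)
        if jti in self._seen:
            return False                    # replay detected -> reject
        if len(self._seen) >= self._max:
            self._seen.popitem(last=False)  # evict oldest to stay bounded
        self._seen[jti] = now + self._lifetime
        return True

cache = ReplayCache(lifetime=60, max_entries=12000)
assert cache.check_and_store("abc", now=0.0)
assert not cache.check_and_store("abc", now=1.0)   # same jti replayed -> rejected
assert cache.check_and_store("abc", now=61.0)      # proof lifetime elapsed
```

Entries only need to live as long as `proof_lifetime`, which is why a small bounded cache (12,000 entries) suffices even under load.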
This section documents a complete Phase 3 test using an Okta Integrator Free account
as the IdP. Okta was chosen for this test because its free developer tier supports
DPoP-bound tokens (cnf.jkt) out of the box — the same feature works identically with
Keycloak, Authentik, or any IdP that issues RFC 9449-compliant DPoP-bound tokens.
The test script lives in test/dpop/test_dpop.py.
Standard Bearer tokens are reusable by anyone who intercepts them. DPoP (RFC 9449) binds the token to a specific keypair held by the client:
- The client generates an EC P-256 keypair once and keeps the private key.
- Every token request includes a signed `DPoP` proof header. The IdP embeds the public key thumbprint (`cnf.jkt`) in the issued access token.
- Every request to Obex includes both `Authorization: Bearer <token>` and a fresh `DPoP` proof signed with the same private key.
- Obex verifies that the proof's key thumbprint matches `cnf.jkt` in the token, that the proof covers the correct HTTP method and URL (`htm`/`htu`), that it includes an access token hash (`ath`), and that the `jti` has not been replayed.
A stolen Bearer token is useless without the private key.
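The thumbprint comparison behind `cnf.jkt` is defined by RFC 7638 and can be reproduced with the stdlib. A sketch for EC keys (the `x`/`y` values below are illustrative, not a key from this repo):

```python
import base64, hashlib, json

def jwk_thumbprint(jwk: dict) -> str:
    """RFC 7638 thumbprint: SHA-256 over the JWK's required members,
    serialized with lexicographically sorted keys and no whitespace.
    For EC keys the required members are crv, kty, x, y."""
    required = {k: jwk[k] for k in ("crv", "kty", "x", "y")}
    canonical = json.dumps(required, separators=(",", ":"), sort_keys=True)
    digest = hashlib.sha256(canonical.encode()).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

# Hypothetical P-256 public key (x/y coordinates are illustrative)
jwk = {"kty": "EC", "crv": "P-256",
       "x": "f83OJ3D2xF1Bg8vub9tLe1gHMzV76e8Tus9uPHvRVEU",
       "y": "x_FEzRu9m36HLN_tue659LNpXW6pCyStikYjKIWI5a0"}
print(jwk_thumbprint(jwk))  # 43-char base64url string, compared against cnf.jkt
```

Obex computes this thumbprint from the public key in the proof's header and rejects the request unless it equals the token's `cnf.jkt`.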
Security → API → Authorization Servers → Add Authorization Server
- Name: `obextest` (or any name)
- Audience: `https://<SERVER_IP>/mcp` — must match `resource.url` in `obex.yaml`
- Description: optional
Note the Issuer URI shown on the server's Settings tab, e.g.:
https://your-tenant.okta.com/oauth2/obextest
On the auth server → Scopes tab → Add Scope:
- Name: `mcp:read`
- Display name: `MCP read access`
- Description: `Read access to MCP servers`
- User consent: Implicit
- Default scope: unchecked
- Metadata: unchecked
Applications → Applications → Create App Integration
- Sign-in method: API Services
- App name: `obex-dpop-test`
From the app's General tab note:
- Client ID
- Client Secret (choose Client Secret authentication)
Still on the General tab → scroll to Proof of Possession:
- Check "Require Demonstrating Proof of Possession (DPoP) header in token requests"
- Save
Security → API → Authorization Servers → obextest → Access Policies tab → Add Policy:
- Name: `obex-dpop-test`
- Assign to: select the `obex-dpop-test` application
Add a rule inside that policy:
- Name: `allow-client-credentials`
- Grant type: Client Credentials only
Add to your obex.yaml (or obex/mariadb.yaml):
idp:
  issuer: "https://your-tenant.us.auth0.com/"   # existing primary IdP
  jwks_uri: "https://your-tenant.us.auth0.com/.well-known/jwks.json"
  clock_skew: "10s"
  accepted_algorithms: [RS256]
  additional_issuers:
    - issuer: "https://your-tenant.okta.com/oauth2/your-auth-server"
      jwks_uri: "https://your-tenant.okta.com/oauth2/your-auth-server/v1/keys"
dpop:
  enabled: true
  required: false          # true = reject requests without a DPoP proof
  proof_lifetime: "60s"
  replay_cache:
    backend: "memory"
    max_entries: 12000

`required: false` keeps Auth0 Bearer-only tokens working alongside DPoP tokens. Set to `true` when you want to mandate DPoP for all clients.
Restart after editing:
docker compose restart obex-mariadb

The test lives in its own venv:
cd /path/to/obex
python3 -m venv test/dpop/.venv
test/dpop/.venv/bin/pip install requests cryptography

Edit `test/dpop/test_dpop.py` — fill in the four variables at the top:
OKTA_DOMAIN = "your-tenant.okta.com"
CLIENT_ID = "<from Okta app General tab>"
CLIENT_SECRET = "<from Okta app General tab>"
OBEX_URL = "https://<SERVER_IP>:8443"Run:
test/dpop/.venv/bin/python3 test/dpop/test_dpop.py- Generates an EC P-256 keypair (
test/dpop/dpop_key.pem) on first run; reuses it on subsequent runs. - Gets a DPoP-bound token from Okta — Okta requires a nonce on the first attempt
and returns it in the
DPoP-Nonceresponse header; the script retries automatically with the nonce included in the proof. - Inspects the token — prints
token_type,scope,aud, andcnf.jktso you can verify binding before calling Obex. - Calls Obex with
Authorization: Bearer <token>+DPoP: <proof>(proof includesath= SHA-256 of the access token). - Reports pass/fail and prints the MariaDB MCP response.
Generated new DPoP key: test/dpop/dpop_key.pem
── Step 1: get DPoP-bound token from Okta ──
Okta requires nonce, retrying with: Nw00hieqzlb8OkLq7bXjfdyKiShiF7aO
token_type : DPoP
scope : mcp:read
expires_in : 3600s
aud : https://<SERVER_IP>/mcp
cnf.jkt : kK3FLL_gbVg6cW6yIsABPbxidEwe8dW4ScBZieNZzUI
── Step 2: call Obex with DPoP proof ──
HTTP 200
{"result":{"tools":[{"name":"mysql_query",...}]},"jsonrpc":"2.0","id":1}
Phase 3 DPoP test PASSED.
With required: true in the config, a plain Bearer request (no DPoP header) is rejected:
curl -ski -H "Authorization: Bearer sometoken" https://<SERVER_IP>:8443/mcp

HTTP/2 401
www-authenticate: Bearer realm="mcp", ..., error="invalid_request"
{"error":"unauthorized","reason":"missing_dpop"}
The DPoP test script still passes because it always sends a valid proof.
| Symptom | Cause | Fix |
|---|---|---|
| `use_dpop_nonce` on retry | Script bug | Script handles this automatically |
| `cnf.jkt: (missing)` | DPoP not enabled on Okta app | App → General → Proof of Possession → enable |
| `audience_mismatch` | Okta audience ≠ `resource.url` | Set auth server Audience to match exactly |
| `invalid_issuer` | Wrong issuer URL in `additional_issuers` | Must match the Issuer URI on the auth server Settings tab |
| `invalid_dpop` | `htu` mismatch | `OBEX_URL` + `OBEX_PATH` in script must match `public_url` + path in `obex.yaml` |
| `missing_dpop` | `required: true` but no proof sent | Script always sends proof — check other clients |